There are several partitioning methods, such as Hash, DB2, Random, etc. When using Hash partitioning we specify the partition key.
81. How do you execute a Datastage job from the command line prompt?
Using the "dsjob" command, for example: dsjob -run -jobstatus projectname jobname
82. What are Stage Variables, Derivations and Constraints?
Stage Variable - An intermediate processing variable that retains its value during a read and does not pass the value to a target column. Derivation - An expression that specifies the value to be passed on to the target column. Constraint - A condition that evaluates to true or false and controls whether a row flows down an output link.
83. What is the default cache size? How do you change the cache size if needed?
The default cache size is 256 MB. We can increase it by going into Datastage Administrator, selecting the Tunables tab and specifying the cache size there.
84. Containers: Usage and Types?
Container is a collection of stages used for the purpose of Reusability. There are 2
types of Containers. a) Local Container: Job Specific b) Shared Container: Used in
any job within a project.
85. Compare and Contrast ODBC and Plug-In stages?
ODBC: a) Poor Performance. b) Can be used for Variety of Databases. c) Can handle
Stored Procedures. Plug-In: a) Good Performance. b) Database specific.(Only one
database) c) Cannot handle Stored Procedures.
86. How do you run a Shell Script within the scope of a Datastage job?
By using the "ExecSH" subroutine in the Before/After job properties.
87. Types of Parallel Processing?
Parallel Processing is broadly classified into 2 types. a) SMP - Symmetrical Multi
Processing. b) MPP - Massive Parallel Processing.
88. What does a Config File in parallel extender consist of?
The config file consists of the following: a) the number of processing nodes, and b) the actual disk storage locations.
89. Functionality of Link Partitioner and Link Collector?
Link Partitioner: it splits data into various partitions or data flows using various partition methods. Link Collector: it collects the data coming from the partitions, merges it into a single data flow and loads it to the target.
90. What are Modulus and Splitting in a Dynamic Hashed File?
The modulus is the number of groups in the hashed file, and in a dynamic hashed file it changes as the data volume changes: when the file grows, groups are split ("splitting") and the modulus increases; when the file shrinks, groups are merged and the modulus decreases.
91. Types of views in Datastage Director?
There are 3 types of views in Datastage Director: a) Job View - dates the jobs were compiled. b) Log View - status of the job's last run. c) Status View - warning messages, event messages, and program-generated messages.
What is the architecture of any data warehousing project? What is the flow?
1) The basic step of data warehousing starts with data modelling, i.e. creation of dimensions and facts.
2) The data warehouse starts with the collection of data from source systems such as OLTP, CRM, ERP, etc.
3) Cleansing and transformation are done with an ETL (Extraction, Transformation, Loading) tool.
4) By the end of the ETL process the target databases (dimensions, facts) are ready with data that satisfies the business rules.
5) Finally, with the use of reporting (OLAP) tools we get the information used for decision support.
Whereas in a snowflake schema hierarchies are broken into separate tables. These hierarchies help to drill down the data from the topmost level to the lowermost level.
What is a factless fact table? Where have you used it in your project?
A factless fact table contains nothing but dimensional keys. It is used to support negative analysis reporting, for example a store that did not sell a product for a given period.
What is a snapshot?
A snapshot is a static data source; it is a permanent local copy or picture of a report and is suitable for disconnected networks. We cannot add any columns to a snapshot. We can sort, group and aggregate it, and it is mainly used for analyzing historical data.
What is a cube, why do we create cubes, and what is the difference between ETL and OLAP cubes?
Any schema, table or report that gives you meaningful information about one attribute with respect to more than one attribute is called a cube. For example, in a product table with Product ID and Sales columns we can analyze Sales with respect to Product Name, but if you analyze Sales with respect to Product as well as Region (Region being an attribute in the Location table), the resulting report, table or schema would be a cube.
ETL cubes: built in the staging area to load frequently accessed reports to the target.
Reporting cubes: built after the actual load of all the tables to the target, depending on the customer's requirements for business analysis.
What is ODS?
ODS stands for Operational Data Store.
What is the need for a surrogate key; why is the primary key not used as the surrogate key?
A surrogate key is an artificial identifier for an entity. Surrogate key values are generated by the system sequentially (like the Identity property in SQL Server or a Sequence in Oracle). They do not describe anything.
A primary key is a natural identifier for an entity. Primary key values are entered by the users and uniquely identify each row, with no repetition of data.
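A minimal sketch of system-generated surrogate keys, assuming an Oracle target (the schema, sequence, table and column names are hypothetical):

sqlplus -s dwh_user/"$DWH_PWD"@DWHDB <<'SQL'
-- the surrogate key carries no business meaning; it is simply the next sequence number
CREATE SEQUENCE customer_sk_seq START WITH 1 INCREMENT BY 1 CACHE 1000;
INSERT INTO dim_customer (customer_sk, customer_natural_id, customer_name)
VALUES (customer_sk_seq.NEXTVAL, 'CUST-1001', 'Lisa');
COMMIT;
SQL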
Whatever change is made in the source, for each and every record there is a new entry on the target side, whether it is an UPDATE or an INSERT, and the target maintains the history.
Let me give an example to make the point clear: account information is usually maintained in two categories, a Current Account set and a Time-of-Event Account set, i.e. two sets of tables. CUR_ACCT is a fast-moving dimension containing information like Balance, while TOE_ACCT contains information like contact details and phone number, where history is not only important but the data is considered to change slowly.
In this respect the TOE_ACCT table qualifies as a slowly changing dimension.
Difference between Snowflake and Star Schema. In which situations is a snowflake schema better than a star schema, and when is the opposite true?
Star schema: a centralized fact table surrounded by different dimensions.
Snowflake schema: in the same star schema, dimensions are split into further dimensions.
A star schema contains highly denormalized data; a snowflake schema contains partially normalized data. A star schema cannot have parent tables, but a snowflake schema can contain parent tables.
Why go for a star schema: 1) fewer joins, 2) a simpler database, 3) supports drill-up/drill-down options.
Why go for a snowflake schema: sometimes we need to provide separate dimensions derived from existing dimensions; in that case we go for a snowflake.
Disadvantage of snowflake: query performance is lower because more joins are involved.
What is the main difference between the Inmon and Kimball philosophies of data warehousing?
Both differ in their approach to building the data warehouse.
According to Kimball: Kimball views data warehousing as a constituency of data marts. Data marts are focused on delivering business objectives for departments in the organization, and the data warehouse is a conformed dimension of the data marts. Hence a unified view of the enterprise can be obtained from dimension modeling at a local departmental level.
Kimball: first data marts, then combined into a data warehouse.
Inmon believes in creating a data warehouse on a subject-by-subject-area basis. Hence the development of the data warehouse can start with data from the online store; other subject areas can be added to the data warehouse as their needs arise. Point-of-sale (POS) data can be added later if management decides it is necessary.
Inmon: first the data warehouse, later the data marts.
What are the data types present in BO, and what happens if we implement a view in the designer and the report?
To my knowledge these are called object types in Business Objects. An alias is different from a view in the universe: a view is at the database level, while an alias is a different name given to the same table to resolve loops in the universe.
The different data types in Business Objects are: 1. Character 2. Date 3. Long text 4. Number.
standard process for doing so. Some methodologies, such as IDEF1X, specify a bottom-up approach.
What is Normalization, First Normal Form, Second Normal Form, Third Normal Form?
Normalization can be defined as splitting a table into two or more different tables so as to avoid duplication of values.
What are Semi-additive and factless facts and in which scenario will you use such kinds of fact
tables?
Semi-Additive: Semi-additive facts are facts that can be summed up for some of the dimensions in the fact table, but not for others. For example, Current_Balance and Profit_Margin are facts. Current_Balance is a semi-additive fact, as it makes sense to add it up across all accounts (what is the total current balance for all accounts in the bank?), but it does not make sense to add it up through time (adding up all current balances for a given account for each day of the month does not give us any useful information).
A factless fact table captures the many-to-many relationships between dimensions, but contains no numeric or textual facts. They are often used to record events or coverage information. Common examples of factless fact tables include:
- Identifying product promotion events (to determine promoted products that did not sell).
- Tracking student attendance or registration events.
- Tracking insurance-related accident events.
- Identifying building, facility, and equipment schedules for a hospital or university.
They can be used as primary key for the fact table but they cannot act as foreign keys.
What is VLDB?
VLDB stands for Very Large Database. The term is sometimes used to describe databases occupying magnetic storage in the terabyte range and containing billions of table rows. Typically, these are decision support systems or transaction processing applications serving large numbers of users.
What is ETL?
ETL is an abbreviation for "Extract, Transform and Load". This is the process of extracting data from operational data sources or external data sources, transforming the data (which includes cleansing, aggregation, summarization and integration, as well as basic transformation) and loading the data into some form of data warehouse.
What is the definition of a normalized and a denormalized view, and what are the differences between them?
I would like to add one more point here: as OLTP is in normalized form, more tables are scanned or referred to for a single query, since data has to be fetched from the respective master tables through primary key and foreign key relationships. Whereas in OLAP, as the data is in denormalized form, fewer tables are queried for a given query. For example, in a banking application in an OLTP environment we will have separate tables for customer personal details, address details, transaction details, etc., whereas in an OLAP environment all these details can be stored in one single table, thus decreasing the scanning of multiple tables for a single customer record.
Data Mart: a data mart is a small data warehouse. In general, a data warehouse is divided into small units according to the business requirements. For example, if we take the data warehouse of an organization, it may be divided into individual data marts such as a Sales data mart, a Finance data mart, a Marketing data mart, an HR data mart, etc. Data marts are used to improve performance during the retrieval of data.
What are data validation strategies for data mart validation after loading process?
Data validation is to make sure that the loaded data is accurate and meets the business
requirements.
Why are OLTP database designs not generally a good idea for a Data Warehouse?
OLTP cannot store historical information about the organization. It is used for storing the details of daily transactions, while a data warehouse is a huge store of historical information obtained from different data marts for making intelligent decisions about the organization.
Which columns go to the fact table and which columns go to the dimension table?
The aggregation or calculated value columns go to the fact table, and the detailed, descriptive information goes to the dimension table.
To add on: foreign key elements along with business measures, such as sales in $ amount, are stored in the fact table; a date may be a business measure in some cases, and units (quantity sold) may be a business measure. It also depends on the granularity at which the data is stored.
In SCD Type 3, attributes are added to the dimension table to support two simultaneous roll-ups - perhaps the current product roll-up as well as the current version minus one, or the current version and the original.
Real-time activity is activity that is happening right now. The activity could be anything, such as the sale of widgets. Once the activity is complete, there is data about it.
Data warehousing captures business activity data. Real-time data warehousing captures business activity data as it occurs. As soon as the business activity is complete and there is data about it, the completed activity data flows into the data warehouse and becomes available instantly. In other words, real-time data warehousing is a framework for deriving information from data as the data becomes available.
What is an ER Diagram?
The Entity-Relationship (ER) model was originally proposed by Peter Chen in 1976 [Chen76] as a way to unify the network and relational database views.
Why should you put your data warehouse on a different system than your OLTP system?
Data Warehouse is a part of OLAP (On-Line Analytical Processing). It is the source from
which any BI tools fetch data for Analytical, reporting or data mining purposes. It generally
contains the data through the whole life cycle of the company/product. DWH contains
historical, integrated, Denormalized, subject oriented data.
However, the OLTP system contains data that is generally limited to the last couple of months or a year at most. The nature of data in OLTP is current, volatile and highly normalized. Since both systems are different in nature and functionality, we should always keep them on separate systems.
Explain the advantages of RAID 1, 1/0, and 5. What type of RAID setup would you put your TX logs on?
RAID 0 - Makes several physical hard drives look like one hard drive. No redundancy, but very fast. May be used for temporary space where loss of the files will not result in loss of committed data.
RAID 1 - Mirroring. Each hard drive in the array has a twin holding an exact copy of its data, so if one hard drive fails, the other is used to pull the data. RAID 1 is half the speed of RAID 0, and read and write performance are good.
RAID 1/0 - Striped RAID 0, then mirrored RAID 1. Similar to RAID 1, sometimes faster than RAID 1, depending on the vendor implementation.
RAID 5 - Great for read-only systems. Write performance is about a third of RAID 1, but reads are the same as RAID 1. RAID 5 is great for a DW but not good for OLTP.
Do you need separate space for the Data Warehouse and the Data Marts?
In the data warehouse all the information of the enterprise is there, but a data mart is specific to a particular analysis such as sales or production, so a data mart is subject oriented. The warehouse is a collection of data marts, so we consider it subject oriented too because it is a collection of data marts. For individual analysis we need data marts.
1. What other performance tunings have you done in your last project to increase the performance of slowly running jobs?
1) Minimize the usage of Transformers (instead use Copy, Modify, Filter, Row Generator).
2) Use SQL code while extracting the data; handle nulls; minimize warnings.
3) Reduce the number of lookups in a job design; use not more than 20 stages in a job.
4) Use an IPC stage between two passive stages to reduce processing time.
5) Drop indexes before data loading and recreate them after loading data into tables.
6) There is no hard limit on the number of stages (like 20 or 30), but we can break a large job into smaller jobs and then use Dataset stages to pass the data between them.
7) Check the write cache of the hash file. If the same hash file is used for lookup as well as the target, disable this option. If the hash file is used only for lookup, then enable "Preload to memory"; this will improve the performance. Also, check the order of execution of the routines.
8) Don't use more than 7 lookups in the same transformer; introduce new transformers
if it exceeds 7 lookups.
9) Use Preload to memory option in the hash file output.
10) Use Write to cache in the hash file input.
11) Write into the error tables only after all the transformer stages.
12) Reduce the width of the input record - remove the columns that you would not use.
13) Cache the hash files you are reading from and writing into. Make sure your cache is
big enough to hold the hash files.
(Use ANALYZE.FILE or HASH.HELP to determine the optimal settings for your hash files.)
This would also minimize overflow on the hash file.
14) If possible, break the input into multiple threads and run multiple instances of the job.
15) Staged the data coming from ODBC/OCI/DB2UDB stages for optimum performance
also for data recovery in case job aborts.
16) Tuned the OCI stage for 'Array Size' and 'Rows per Transaction' numerical values for
faster inserts, updates and selects.
17) Tuned the 'Project Tunables' in Administrator for better performance.
18) Sorted the data as much as possible in DB and reduced the use of DS-Sort for better
performance of jobs. Used sorted data for Aggregator.
19) Removed the data not used from the source as early as possible in the job.
20) Worked with DB-admin to create appropriate Indexes on tables for better performance
of DS queries
21) Converted some of the complex joins/business in DS to Stored Procedures on DS for
faster execution of the jobs.
22) If an input file has an excessive number of rows and can be split up, then use standard logic to run the jobs in parallel (see the sketch after this list).
23) Constraints are generally CPU intensive and take a significant amount of time to
process. This may be the case if the constraint calls routines or external macros but if it is
inline code then the overhead will be minimal.
24) Try to have the constraints in the 'Selection' criteria of the jobs itself. This will eliminate
the unnecessary records even getting in before joins are made.
25) Tuning should occur on a job-by-job basis.
26) Using a constraint to filter a record set is much slower than performing a SELECT …
WHERE….
27) Make every attempt to use the bulk loader for your particular database. Bulk loaders
are generally faster than using ODBC or OCI.
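A hedged sketch of items 14 and 22 (the file, project and job names are hypothetical, and the job must be compiled as a multiple-instance job so that JobName.N invocation ids are accepted):

# split the big input into chunks and run one job instance per chunk
split -l 500000 /data/in/customers.dat /data/in/customers_part_
n=0
for f in /data/in/customers_part_*; do
  n=$((n + 1))
  dsjob -run -param INPUT_FILE="$f" MyProject LoadCustomers.$n &
done
wait    # wait for all instances to finish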
2. How can I extract data from DB2 (on IBM i-series) to the data warehouse via
Datastage as the ETL tool? I mean do I first need to use ODBC to create
connectivity and use an adapter for the extraction and transformation of data?
You would need to install ODBC drivers to connect to the DB2 instance (they do not come with the regular drivers we usually install; use the CD provided with the DB2 installation, which has the ODBC drivers for DB2) and then try it out.
You use the Designer to build jobs by creating a visual design that models the flow and
transformation of data from the data source to the target warehouse. The Designer
graphical interface lets you select stage icons, drop them onto the Designer work area,
and add links.
You can do this by passing parameters from a UNIX file and then calling the execution of a Datastage job; the DS job has the parameters defined, and they are passed in from UNIX.
You always enter Datastage through a Datastage project. When you start a Datastage client you are prompted to connect to a project.
Yes, we can do it in an indirect way. First create a job which populates the data from the database into a sequential file and name it, say, Seq_First1. Take the flat file which you already have and use a Merge stage to join the two files. You have various join types in the Merge stage, like Pure Inner Join, Left Outer Join, Right Outer Join, etc.; you can use whichever one suits your requirements.
10. Can anyone tell me how to extract data from more than 1 heterogeneous source? For example, 1 sequential file, Sybase and Oracle in a single job.
Yes, you can extract data from heterogeneous sources in Datastage using the Transformer stage; it is simple - you just need to form a link between the sources in the Transformer stage.
11. Will Datastage consider the second constraint in the transformer if the first constraint is satisfied (if link ordering is given)?
Answer: Yes.
12. How do we use the NLS function in Datastage? What are the advantages of the NLS function? Where can we use it? Explain briefly.
13. If a Datastage job aborts after, say, 1000 records, how do you continue the job from the point of failure?
By specifying checkpointing in the job sequence properties; when we restart the sequence, the job will start by skipping up to the failed record. This option is available from the 7.5 edition.
14. How do you kill a job in Datastage? ANS: by killing the respective process ID.
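A cleaner alternative to killing the operating-system process is to ask the engine to stop the job (the project and job names below are hypothetical):

dsjob -stop MyProject LoadCustomers      # request the engine to stop the running job
dsjob -jobinfo MyProject LoadCustomers   # check the job status afterwards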
Basically, an environment variable is a predefined variable that we can use while creating a DS job. We can set it either at the project level or at the job level. Once we set a specific variable, that variable is available in the project/job.
We can also define new environment variables; for that we go to DS Admin.
16. What are all the third party tools used with Datastage?
Autosys, TNG and Event Coordinator are some of the ones I know and have worked with.
APT_CONFIG is just an environment variable used to identify the *.apt file; the *.apt file itself holds the node information and the configuration of the SMP/MPP server.
17. If you're running 4-way parallel and you have 10 stages on the canvas, how many processes does Datastage create?
The answer is 40: you have 10 stages and each stage can be partitioned and run on 4 nodes, which makes the total number of processes generated 40.
18. Did you parameterize the job or hard-code the values in the jobs?
Always parameterize the job. The values come either from Job Properties or from a 'Parameter Manager', a third-party tool. There is no way you would hard-code parameters in your jobs. The most often parameterized variables in a job are the DB DSN name, username and password.
Actually the Number of Nodes depends on the number of processors in your system. If
your system is supporting two processors we will get two nodes by default.
No, it is not possible to run Parallel jobs in server jobs. But Server jobs can be executed in
Parallel jobs
21. Is it possible for two users to access the same job at a time in Datastage?
No, it is not possible for two users to access the same job at the same time. DS will produce the error: "Job is accessed by other user".
MetaStage is used to handle the metadata, which is very useful for data lineage and data analysis later on. Metadata defines the type of data we are handling. These data definitions are stored in the repository and can be accessed with the use of MetaStage.
23. What is merge and how can it be done? Please explain with a simple example taking 2 tables.
Merge is used to join two tables. It takes the key columns and sorts them in ascending or descending order. Let us consider two tables, Emp and Dept. If we want to join these two tables we have DeptNo as a common key, so we can give that column name as the key, sort DeptNo in ascending order, and join the two tables.
25. What are the enhancements made in Datastage 7.5 compared with 7.0?
Many new stages were introduced compared to Datastage version 7.0. In server jobs we have the Stored Procedure stage and the Command stage, and a generate-report option was added in the File tab.
In job sequences, stages like Start Loop activity, End Loop activity, Terminate Loop activity and User Variables activities were introduced.
In parallel jobs the Surrogate Key stage and Stored Procedure stage were introduced.
26. How can we join one Oracle source and a Sequential file?
The stages followed by an exception activity will be executed whenever an unknown error occurs while running the job sequencer.
The main difference is the vendors; each one has strengths coming from its architecture. For Datastage it is a top-down approach. Based on the business needs we have to choose the product.
31. What are Static Hash files and Dynamic Hash files?
Hashed files have a default size established by their modulus and separation when you create them, and this can be static or dynamic.
Overflow space is only used when data grows over the reserved size for one of the groups (sectors) within the file. There are as many groups as specified by the modulus.
32. What is the exact difference between the Join, Merge and Lookup stages?
The duplicates can be eliminated by loading the corresponding data into a hash file; specify the columns on which you want to eliminate duplicates as the keys of the hash file.
The different hashing algorithms are designed to distribute records evenly among the groups of the file based on characters and their position in the record ids.
When a hashed file is created, separation and modulus respectively specify the group buffer size and the number of group buffers allocated for the file. When a static hash file is created, Datastage creates a file that contains the number of groups specified by the modulus.
Size of hash file = modulus (number of groups) * separation (group buffer size)
The concept of a surrogate key comes into play when there is a slowly changing dimension in a table. In such a condition there is a need for a key by which we can identify the changes made in the dimensions.
These are system-generated keys; mainly they are just a sequence of numbers, but they can also be alphanumeric values.
These slowly changing dimensions can be of three types, namely SCD1, SCD2 and SCD3.
We can call a Datastage batch job from the command prompt using 'dsjob', and we can also pass all the parameters from the command prompt. Then call this shell script from any of the schedulers available on the market.
The second option is to schedule these jobs using the Datastage Director.
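A minimal wrapper sketch that a scheduler could call (the names, paths and the exact meaning of the status codes are assumptions; verify them against the dsjob documentation for your release):

#!/bin/sh
# run_loadsales.sh - called by the scheduler; fails loudly if the job does not finish OK
PROJECT=MyProject
JOB=LoadSales
dsjob -run -jobstatus -param LOAD_DATE="$(date +%Y-%m-%d)" "$PROJECT" "$JOB"
rc=$?
# with -jobstatus the exit code reflects the job's finishing status
# (commonly 1 = ran OK, 2 = ran with warnings); treat anything else as failure
if [ "$rc" -ne 1 ] && [ "$rc" -ne 2 ]; then
  echo "$JOB failed, dsjob status $rc" >&2
  exit 1
fi
exit 0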
We can also use the Hash File stage to avoid / remove duplicate rows by specifying the
hash key on a particular field
Version Control stores different versions of DS jobs, runs different versions of the same job, reverts to a previous version of a job, and also views version histories.
41. Suppose there are a million records; did you use OCI? If not, which stage do you prefer?
Use Orabulk.
How do you pass the parameters to the job sequence if the job is running at night?
Two ways:
1. Set the default values of the parameters in the job sequencer and map these parameters to the job.
2. Run the job in the sequencer using the dsjob utility, where we can specify the value to be taken for each parameter.
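For option 2, the override of the sequence parameters from the command line could look like this (the sequence, parameter and project names are hypothetical):

dsjob -run \
      -param LOAD_DATE="$(date +%Y-%m-%d)" \
      -param SRC_SYSTEM=CRM \
      MyProject MasterLoadSequence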
What is the transaction size and array size in the OCI stage? How can these be used?
Transaction Size - This field exists for backward compatibility, but it is ignored for
release 3.0 and later of the Plug-in. The transaction size for new jobs is now handled by
Rows per transaction on the Transaction Handling tab on the Input page.
Rows per transaction - The number of rows written before a commit is executed for the
transaction. The default value is 0, that is, all the rows are written before being committed
to the data table.
Array Size - The number of rows written to or read from the database at a time. The
default value is 1, that is, each row is written in a separate statement.
What is the difference between DRS (Dynamic Relational Stage) and the ODBC stage?
To answer your question, the DRS stage should be faster than the ODBC stage, as it uses native database connectivity. You will need to install and configure the required database clients on your Datastage server for it to work.
The Dynamic Relational Stage was leveraged for PeopleSoft, to have a job that could run on any of the supported databases. It supports ODBC connections too; read more about that in the plug-in documentation.
ODBC uses the ODBC driver for a particular database, while DRS (Dynamic Relational Stage) is a stage that tries to make it seamless to switch from one database to another. It uses the native connectivity for the chosen target ...
What is the meaning of "Try to have the constraints in the 'Selection' criteria of the jobs itself; this will eliminate unnecessary records even getting in before joins are made"?
This means try to improve the performance by avoiding use of constraints wherever
possible and instead using them while selecting the data itself using a where clause. This
improves performance.
How do you drop the index before loading data into the target, and how do you rebuild it, in Datastage?
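One common approach is sketched below, assuming an Oracle target (the index, table and credential names are hypothetical): run the DROP from a before-job subroutine (e.g. via ExecSH) or from the target stage's open SQL, and the CREATE after the load.

# before the load
sqlplus -s dwh_user/"$DWH_PWD"@DWHDB <<'SQL'
DROP INDEX sales_fact_cust_idx;
SQL

# after the load completes
sqlplus -s dwh_user/"$DWH_PWD"@DWHDB <<'SQL'
CREATE INDEX sales_fact_cust_idx ON sales_fact (customer_sk);
SQL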
The Administrator enables you to set up Datastage users, control the purging of the
Repository, and, if National Language Support (NLS) is enabled, install and manage maps
and locales.
What is the order of execution done internally in the transformer, with the stage editor having input links on the left hand side and output links on the right?
Stage variables first, then constraints, and then column derivations or expressions.
A slowly changing dimension is a common problem in data warehousing. For example: there exists a customer called Lisa in a company ABC and she lives in New York. Later she moves to Florida. The company must modify her address now. In general there are 3 ways to solve this problem:
Type 1: The new record replaces the original record; no trace of the old record remains.
Type 2: A new record is added into the customer dimension table; the customer is therefore treated essentially as two different people.
Type 3: The original record is modified to reflect the change, typically by adding a column that holds the previous value.
In Type 1 the new value overwrites the existing one, which means no history is maintained; the history of where the person stayed last is lost. It is simple to use.
In Type 2 a new record is added, so both the original and the new record are present; the new record gets its own primary key. The advantage of Type 2 is that historical information is maintained, but the size of the dimension table grows, and storage and performance can become a concern. Type 2 should only be used if it is necessary for the data warehouse to track historical changes.
In Type 3 there are 2 columns, one to indicate the original value and the other to indicate the current value. For example, a new column will be added which shows the original address as New York and the current address as Florida. This helps keep part of the history, and the table size is not increased. But one problem is that when the customer moves from Florida to Texas, the New York information is lost, so Type 3 should only be used if changes will occur only a finite number of times.
Server jobs mainly execute in a sequential fashion; the IPC stage, as well as the Link Partitioner and Link Collector, simulate a parallel mode of execution for server jobs on a single CPU. Link Partitioner: it receives data on a single input link and diverts the data to a maximum of 64 output links, and the data is processed by the same stage with the same metadata. Link Collector: it collects the data from up to 64 input links, merges it into a single data flow and loads it to the target. Both are active stages, and the design and mode of execution of server jobs have to be decided by the designer.
JCL stands for Job Control Language; it is used to run a number of jobs at a time, with or without using loops.
Steps: click on Edit in the menu bar, select 'Job Properties' and enter the parameters, for example:
Parameter   Prompt     Type
STEP_ID     STEP_ID    String
SRC         Source     String
DSN         DSN        String
unm         Username   String
pwd         Password   String
After entering the above, set the JCL button, select the jobs from the list box and run the job.
It is a tricky question to answer, but one thing I can tell you is that Datastage TX is not an ETL tool, and it is not a new version of Datastage 7.5. TX is used for ODS sources; that is as much as I know.
If the size of the Hash file exceeds 2 GB, what happens? Does it overwrite the current rows?
How much would be the size of the database in Datastage? What is the difference between In-process and Inter-process?
In-process:
You can improve the performance of most DataStage jobs by turning in-process row
buffering on and recompiling the job. This allows connected active stages to pass data via
buffers rather than row by row.
Note: You cannot use in-process row-buffering if your job uses COMMON blocks in
transform functions to pass data between stages. This is not recommended practice, and
it is advisable to redesign your job to use row buffering rather than COMMON blocks.
Inter-process
Use this if you are running server jobs on an SMP parallel system. This enables the job to
run using a separate process for each active stage, which will run simultaneously on a
separate processor.
Note: You cannot use inter-process row buffering if your job uses COMMON blocks in transform functions to pass data between stages. This is not recommended practice, and it is advisable to redesign your job to use row buffering rather than COMMON blocks.
How can you do an incremental load in Datastage? Incremental load means a daily load.
Whenever you are selecting data from the source, select the records which were loaded or updated between the timestamp of the last successful load and today's load start date and time. For this you have to pass parameters for those two dates: store the last run date and time in a file, read it through a job parameter, and pass the current date and time as the second argument.
In the target, make the column the key column and run the job.
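A hedged sketch of the watermark handling (the file locations, parameter names and job name are hypothetical; the job itself would use EXTRACT_FROM/EXTRACT_TO in the WHERE clause of its source SQL):

#!/bin/sh
STATE_FILE=/opt/etl/state/last_run_ts.txt
LAST_RUN=$(cat "$STATE_FILE")                 # e.g. 2010-09-18 02:00:00
NOW=$(date '+%Y-%m-%d %H:%M:%S')

dsjob -run -jobstatus \
      -param EXTRACT_FROM="$LAST_RUN" \
      -param EXTRACT_TO="$NOW" \
      MyProject IncrementalLoadSales
rc=$?
# advance the watermark only if the job finished OK (status code meanings: verify for your release)
if [ "$rc" -eq 1 ] || [ "$rc" -eq 2 ]; then
  echo "$NOW" > "$STATE_FILE"
fi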
What are XML files, how do you read data from XML files, and which stage is to be used?
In the palette there are Real Time stages such as XML Input, XML Output and XML Transformer.
Flat files store the data, and the path can be given in the General tab of the Sequential File stage.
File Set: it allows you to read data from or write data to a file set. The stage can have a single input link, a single output link and a single rejects link, and it only executes in parallel mode. The data files and the file that lists them are called a file set. This capability is useful because some operating systems impose a 2 GB limit on the size of a file and you need to distribute files among nodes to prevent overruns.
Datasets are used to import the data in parallel jobs, like ODBC in server jobs.
What is the meaning of file extender in Datastage server jobs? Can we run a Datastage job from one job to another job?
File extender means adding columns or records to an already existing file. In Datastage we can run one job from another job.
Either use the copy command as a before-job subroutine if the metadata of the 2 files is the same, or create a job to concatenate the 2 files into one if the metadata is different.
What is the default cache size? How do you change the cache size if needed?
The default read cache size is 128 MB. We can increase it by going into Datastage Administrator, selecting the Tunables tab and specifying the cache size.
Datastage provides a set of variables containing useful system information that you can
access from a transform or routine. System variables are read-only.
@DATE The internal date when the program started. See the Date function.
@DAY The day of the month extracted from the value in @DATE.
@FALSE The compiler replaces the value with 0.
@FM A field mark, Char(254).
@IM An item mark, Char(255).
@INROWNUM Input row counter. For use in constraints and derivations in Transformer
stages.
@OUTROWNUM Output row counter (per link). For use in derivations in Transformer
stages.
@LOGNAME The user login name.
@MONTH The current month extracted from the value in @DATE.
@NULL The null value.
@NULL.STR The internal representation of the null value, Char(128).
@PATH The pathname of the current Datastage project.
@SCHEMA The schema name of the current Datastage project.
@SM A sub value mark (a delimiter used in Universe files), Char(252).
@SYSTEM.RETURN.CODE
Status codes returned by system processes or commands.
@TIME The internal time when the program started. See the Time function.
@TM A text mark (a delimiter used in UniVerse files), Char(251).
@TRUE The compiler replaces the value with 1.
@USERNO The user number.
@VM A value mark (a delimiter used in UniVerse files), Char(253).
Datastage Director is the GUI used to monitor, run, validate and schedule Datastage server jobs.
A Datastage developer is one who codes the jobs; a Datastage designer is one who designs the job, i.e. he deals with the blueprints and designs the jobs and the stages required for developing the code.
What will you do in a situation where somebody wants to send you a file, use that file as an input or reference, and then run a job?
A. Under Windows: use the 'WaitForFileActivity' under the Sequencers and then run the job. You can schedule the sequencer around the time the file is expected to arrive.
B. Under UNIX: poll for the file. Once the file has arrived, start the job or sequencer depending on the file.
What are the command line functions that import and export the DS jobs?
A sequencer allows you to synchronize the control flow of multiple activities in a job
sequence. It can have multiple input triggers as well as multiple output triggers. The
sequencer operates in two modes: ALL mode. In this mode all of the inputs to the
sequencer must be TRUE for any of the sequencer outputs to fire. ANY mode. In this
mode, output triggers can be fired if any of the sequencer inputs are TRUE
What are the Repository Tables in Datastage and what are they?
What is the difference between an operational data store (ODS) and a data warehouse?
100+ jobs for every 6 months if you are in Development, if you are in testing 40 jobs for
every 6 months although it need not be the same number for everybody
1. Go to DataStage Administrator -> Projects -> Properties -> Environment -> User Defined. Here you can see a grid where you can enter your parameter name and the corresponding path of the file.
2. Go to the stage Tab of the job, select the NLS tab, click on the "Use Job Parameter"
and select the parameter name which you have given in the above. The selected
parameter name appears in the text box beside the "Use Job Parameter" button. Copy the
parameter name from the text box and use it in your job. Keep the project default in the
text box.
What is the utility you use to schedule the jobs on a UNIX server other than using Ascential Director?
AUTOSYS: through AutoSys you can automate the job by invoking the shell script written to schedule the Datastage jobs.
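If AutoSys is not available, plain cron on the UNIX server can drive the same wrapper script; an illustrative crontab entry (the path, log file and time are hypothetical):

# m h dom mon dow  command -- run the nightly load wrapper at 02:00 every day
0 2 * * * /opt/etl/scripts/run_nightly_load.sh >> /var/log/etl/nightly.log 2>&1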
I think we can call a job from another job. In fact 'calling' doesn't sound quite right, because you attach/add the other job through the job properties; you can attach zero or more jobs.
If data is partitioned in your job on key 1 and then you aggregate on key 2, what issues could arise?
Rows sharing the same value of key 2 may be spread across partitions, so the aggregation can produce partial (per-partition) results unless the data is repartitioned on key 2 before the Aggregator, which takes additional execution time.
Controlling Datastage jobs through some other Datastage jobs: for example, consider two jobs XXX and YYY. Job YYY can be executed from job XXX by using Datastage macros in Routines.
To execute one job from another job, the following steps need to be followed in Routines.
Container is a collection of stages used for the purpose of reusability. There are 2 types of containers:
a) Local Container: job specific.
b) Shared Container: used in any job within a project.
There are two types of shared container:
1. Server shared container, used in server jobs (can also be used in parallel jobs).
2. Parallel shared container, used in parallel jobs. You can also include server shared containers in parallel jobs as a way of incorporating server job functionality into a parallel stage (for example, you could use one to make a server plug-in stage available to a parallel job).
What are the different types of errors you faced during loading and how did you solve them?
Check the parameters, check whether the input files exist, check whether the input tables exist, and also check the usernames, data source names, passwords and the like.
What is the User Variables activity, when is it used and how is it used? Where is it used, with a real example?
By using the User Variables activity we can create variables in the job sequence; these variables are available to all the activities in that sequence.
I want to process 3 files sequentially, one by one; how can I do that? While processing, it should fetch the files automatically.
If the metadata for all the files is the same, then create a job having the file name as a parameter, then use the same job in a routine and call the job with a different file name each time, or you can create a sequencer to run the job.
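From the command line the same idea can be sketched as a simple loop (the file and job names are hypothetical); each file is processed one after the other and the loop stops at the first failure:

for f in /data/in/file1.txt /data/in/file2.txt /data/in/file3.txt; do
  dsjob -run -jobstatus -param INPUT_FILE="$f" MyProject ProcessFile
  rc=$?
  [ "$rc" -eq 1 ] || [ "$rc" -eq 2 ] || break   # stop on the first job that does not finish OK
done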
What happens when the output of a hash file is connected to a transformer? What error does it throw?
If the hash file output is connected to a Transformer stage, the hash file is treated as the lookup (reference) file when another primary link feeds the same Transformer stage; if there is no primary link, it is treated as the primary link itself. You can do SCD in a server job by using this lookup functionality. This will not return any error code.
Iconv is used to convert a date into the internal format, i.e. a format only Datastage can understand. For example, a date coming in mm/dd/yyyy format: Datastage will convert that date into an internal number such as 740. You can then output that value in your own format by using Oconv. Suppose you want to change mm/dd/yyyy to dd/mm/yyyy: you would use Iconv and then Oconv.
I have never tried doing this, however, I have some information which will help you in
saving a lot of time. You can convert your server job into a server shared container. The
server shared container can also be used in parallel jobs as shared container.
I am using DataStage 7.5 on Unix. Can we use a shared container more than once in a job, and is there any limit on its use? In my job I used the shared container in 6 flows, but at any time only 2 flows are working. Can you please share information on this?
DataStage from staging to MDW is only running at 1 row per second! What do we do to remedy this?
I am assuming that there are too many stages, which is causing the problem, and providing the solution accordingly.
In general, if you have too many stages (especially Transformers and hash lookups), there will be a lot of overhead and the performance will degrade drastically. I would suggest you write a query instead of doing several lookups. It may seem embarrassing to have a tool and still write a query, but that is best at times.
If too many lookups are being done, ensure that you have appropriate indexes while querying. If you do not want to write the query and would rather use intermediate stages, ensure that you properly eliminate data between stages so that data volumes do not cause overhead. There might also be a reordering of stages needed for good performance.
1) For massive transactions, set the hashing size and buffer size to appropriate values so that as much as possible is done in memory and there is no I/O overhead to disk.
2) Enable row buffering and set an appropriate size for the row buffer.
What is the flow of loading data into fact and dimension tables?
What is the difference between a sequential file and a dataset? When do you use the Copy stage?
The Sequential File stage stores a small amount of data with any extension in order to access the file, whereas a Dataset is used to store a huge amount of data and opens only with the .ds extension. The Copy stage copies a single input data set to a number of output data sets; each record of the input data set is copied to every output data set. Records can be copied without modification, or you can drop columns or change the order of columns.
Runtime column propagation (RCP): If RCP is enabled for any job, and specifically for
those stages whose output connects to the shared container input, then meta data will be
propagated at run time, so there is no need to map it at design time.
If RCP is disabled for the job, OSH has to perform an import and an export every time the job runs, and the processing time of the job increases.
What are Routines and where/how are they written, and have you written any routines before?
Routines are stored in the Routines branch of the DataStage Repository, where you can
create, view, or edit them using the Routine dialog box. The following program components are classified as routines:
• Transform functions. These are functions that you can use when defining custom transforms. DataStage has a number of built-in transform functions, which are located in the Routines ➤ Examples ➤ Functions branch of the Repository. You can also define your own transform functions in the Routine dialog box.
• Before/After subroutines. When designing a job, you can specify a subroutine to run before or after the job, or before or after an active stage. DataStage has a number of built-in before/after subroutines, which are located in the Routines ➤ Built-in ➤ Before/After branch in the Repository. You can also define your own before/after subroutines using the Routine dialog box.
• Custom UniVerse functions. These are specialized BASIC functions that have been defined outside DataStage. Using the Routine dialog box, you can get DataStage to create a wrapper that enables you to call these functions from within DataStage. These functions are stored under the Routines branch in the Repository. You specify the category when you create the routine. If NLS is enabled,
Routines are used for implementing business logic. They are of two types: 1) Before subroutines and 2) After subroutines. Steps: double-click on the Transformer stage, right-click on any one of the mapping fields and select the [DS Routines] option; within the edit window give the business logic and select either of the options (Before / After subroutine).
1. Establish baselines.
2. Avoid the use of only one flow for tuning/performance testing.
3. Work in increments.
4. Evaluate data skew.
5. Isolate and solve.
6. Distribute file systems to eliminate bottlenecks.
7. Do not involve the RDBMS in initial testing.
8. Understand and evaluate the tuning knobs available.
ORABULK is used to load bulk data into a single table of a target Oracle database. BCP is used to load bulk data into a single table for Microsoft SQL Server and Sybase.
Open the ODBC Data Source Administrator found in Control Panel / Administrative Tools. Under the System DSN tab, add the Microsoft Excel driver. Then you'll be able to access the XLS file from Datastage.
OCI doesn't mean Orabulk data; it actually uses the Oracle Call Interface to load the data. It is kind of the lowest level of Oracle being used for loading the data.
Lookup: a Lookup references another stage or database to get data from it and passes it on, transformed, to the other database.
Lookup File Set: it allows you to create a lookup file set or reference one for a lookup. The stage can have a single input link or a single output link; the output link must be a reference link. The stage can be configured to execute in parallel or sequential mode when used with an input link. When creating lookup file sets, one file is created for each partition. The individual files are referenced by a single descriptor file, which by convention has the suffix .fs.
Shared Container:
Step 1: Select the stages required.
Step 2: Edit > Construct Container > Shared.
Shared containers are stored in the Shared Containers branch of the tree structure.
There are many ways to populate it; one way is writing a SQL statement in Oracle.
What are the differences between Datastage 7.0 and 7.5 in server jobs?
There are a lot of differences: many new stages are available in DS 7.5, e.g. the CDC stage, the Stored Procedure stage, etc.
Datastage Director. A user interface used to validate, schedule, run, and monitor
Datastage jobs.
Datastage Manager. A user interface used to view and edit the contents of the
Repository.
These are the variables used at the project or job level. We can use them to configure the job, i.e. we can associate the configuration file (without this you cannot run your job) or increase the sequential file or dataset read/write buffer.
Example: $APT_CONFIG_FILE
Like the above we have many environment variables. Go to the job properties and click on "Add Environment Variable" to see most of them.
When we say "validating a job", we are talking about running the job in "check only" mode. The following checks are made:
When the source data is enormous, or for bulk data, we can use OCI and SQL*Loader, depending upon the source.
Where do we use the Link Partitioner in a Datastage job? Explain with an example.
We use the Link Partitioner in DataStage server jobs. The Link Partitioner stage is an active stage which takes one input and allows you to distribute partitioned rows to up to 64 output links.
Purpose of using keys, and the difference between surrogate keys and natural keys.
We use keys to provide relationships between the entities (tables). By using primary and foreign key relationships, we can maintain the integrity of the data.
The natural key is the one coming from the OLTP system. The surrogate key is the artificial key which we create in the target DW. We can use these surrogate keys instead of the natural key. In SCD2 scenarios surrogate keys play a major role.
We have to create users in the Administrator and give the necessary privileges to the users.
Is it possible to move the data from an Oracle warehouse to a SAP warehouse using the DATASTAGE tool?
We can use the Datastage Extract Pack for SAP R/3 and the DataStage Load Pack for SAP BW to transfer the data from Oracle to the SAP warehouse. These plug-in packs are available with DataStage version 7.5.
How do you implement Type 2 slowly changing dimensions in Datastage? Explain with an example.
We can handle rejected rows in two ways with the help of constraints in a Transformer: 1) by putting a constraint on the Rejected cell where we write our constraints in the properties of the Transformer, or 2) by using REJECTED in the expression editor of the constraint. Create a hash file as temporary storage for rejected rows, create a link and use it as one of the outputs of the transformer, and apply either of the two steps above on that link. All the rows which are rejected by all the constraints will go to the hash file.
Does Enterprise Edition only add parallel processing for better performance?
What is the utility you use to schedule the jobs on a UNIX server other than using Ascential Director?
AUTOSYS: through AutoSys we can automate the job by invoking the shell script written to schedule the Datastage jobs.
I think we can call a job from another job. In fact 'calling' doesn't sound quite right, because you attach/add the other job through the job properties; you can attach zero or more jobs. The steps are Edit --> Job Properties --> Job Control, then click on Add Job and select the desired job.
If data is partitioned in your job on key 1 and then you aggregate on key 2, what issues could arise?
Rows sharing the same value of key 2 may end up in different partitions, so the aggregation can produce partial results unless the data is repartitioned on key 2 before the Aggregator, which takes additional execution time.
Ans.
1) E-R Diagrams
2) Dimensional modeling
a) logical modeling b) Physical modeling
Controlling Datastage jobs through some other Datastage jobs: for example, consider two jobs XXX and YYY. Job YYY can be executed from job XXX by using Datastage macros in Routines.
To execute one job from another job, the following steps need to be followed in Routines:
1. Attach the job using the DSAttachJob function.
2. Run the other job using the DSRunJob function.
3. Stop the job using the DSStopJob function.
A constraint specifies the condition under which data flows through an output link; it determines which output link is used. Constraints are nothing but business rules or logic.
For example, if we have to split a customers.txt file into customer address files based on customer country, we need to pass constraints: if we want US customer addresses we pass a constraint for the US customer file, and similarly for the Canadian and Australian customers.
Constraints are used to check for a condition and filter the data. Example: Cust_Id <> 0 is set as a constraint, and it means that only those records meeting this condition will be processed further.
Derivation is the method of deriving the value of a field, for example when you need to
compute a SUM, AVG, etc. A derivation specifies the expression used to pass a value to the
target column. In the simplest case, the input column itself is the derivation that passes its
value to the target column.
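For instance, in the US-customer example above, a rough sketch of one output link could look like this (the column names are hypothetical):
* Constraint on the output link - only US customers pass:
DSLink3.COUNTRY = "US"
* Derivation for a calculated target column TOTAL_AMOUNT:
DSLink3.QTY * DSLink3.UNIT_PRICE
* Simplest possible derivation - the input column passed straight through:
DSLink3.CUSTOMER_NAME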
Any DataStage objects, including whole projects, which are stored in the Manager repository
can be exported to a file. This exported file can then be imported back into DataStage.
A complex design means having more joins and more lookups; such a job design is called a
complex job. We can implement any complex design in DataStage by following a few simple
tips, which also improve performance. There is no hard limit on the number of stages in a
job, but for better performance use at most about 20 stages per job; if a design exceeds 20
stages, split it into another job. Do not use more than 7 lookups for a single Transformer;
otherwise include one more Transformer.
Validation guarantees that a DataStage job can run successfully; it carries out the following
checks without actually processing any data:
1) Connections are made to the sources.
2) The files are opened.
3) The SQL statements necessary for fetching the data are prepared.
4) All connections from source to target are made ready for data processing.
5) Parameters are checked; it also checks whether the input files and input tables exist,
and whether user names, data source names, passwords and the like are valid.
What are the different types of errors you faced during loading and how did you solve them?
How do you fix the error "OCI has fetched truncated data" in DataStage?
Can we use the Change Capture stage to get the truncated data? Members, please
confirm.
What is the User Variables activity, when is it used and how is it used? Explain with a real
example.
By using the User Variables activity we can create variables in the job sequence; these
variables are available to all the activities in that sequence.
1) If an input file has an excessive number of rows and can be split up, then use standard
logic to run jobs in parallel.
Answer: row partitioning and collecting.
If you have SMP machines you can use the IPC, Link Collector and Link Partitioner stages for
performance tuning. If you have cluster or MPP machines you can use parallel jobs.
Yes, you can implement Type 1, Type 2 or Type 3. Let me try to explain Type 2 with a
timestamp.
Step 1: The timestamp is created via a shared container; it returns the system time and one
key. To satisfy the lookup condition we create a key column by using the Column Generator.
Step 2: Our source is a Data Set and the lookup table is an Oracle OCI stage. By using the
Change Capture stage we find the differences; the Change Capture stage returns a value in
change_code. Based on the returned value we find out whether the row is an insert or an
edit/update. If it is an insert we load it with the current timestamp, and the old row with its
old timestamp is kept as history.
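As a rough sketch of the routing step after the Change Capture stage (it assumes the stage's default change_code values, 1 = insert and 3 = edit, and hypothetical link and column names):
* Transformer constraint for the link that inserts brand-new rows:
inlink.change_code = 1
* Transformer constraint for the link that inserts a new version of a changed row;
* the old row, with its old timestamp, stays in the dimension as history:
inlink.change_code = 3
* Derivation for the EFFECTIVE_FROM timestamp column on both links:
CurrentTimestamp()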
Sep 19
Summarize the difference between OLTP, ODS and data warehouse.
OLTP means online transaction processing; it is nothing but a database. Oracle, SQL Server
and DB2 are examples of OLTP databases. OLTP databases, as the name implies, handle
real-time transactions, which inherently have some special requirements.
ODS stands for Operational Data Store. It is a final integration point in the ETL process; we
load the data into the ODS before we load the values into the target.
Data warehouse: a data warehouse is a subject-oriented, integrated, time-variant and
non-volatile collection of data which is used to support management decisions.
Why are OLTP database designs generally not a good idea for a data warehouse?
An OLTP database does not store historical information about the organization; it is used for
storing the details of daily transactions, while a data warehouse is a huge store of historical
information obtained from different data marts, used for making intelligent decisions about
the organization.
What is data cleansing? How is it done?
Simply put, it is purifying the data.
Data cleansing is the act of detecting and removing and/or correcting a database's dirty
data (i.e., data that is incorrect, out-of-date, redundant, incomplete, or formatted
incorrectly).
What is the level of granularity of a fact table?
Level of granularity means the level of detail that you put into the fact table in a data
warehouse. For example, based on the design you can decide to store the sales data for each
transaction; the level of granularity then means how much detail you are willing to keep for
each transactional fact, e.g. product sales with respect to each minute, or aggregated up to
the minute.
It also means that we can have, for example, data aggregated for a year for a given product,
while the same data can be drilled down to monthly, weekly and daily levels. The lowest
level is known as the grain; going down to that detail is granularity.
Which columns go to the fact table and which columns go to the dimension table?
The aggregation or calculated-value columns go to the fact table, and descriptive detail
information goes to the dimension table.
To add on, foreign key elements are stored in the fact table along with business measures,
such as sales in dollar amount or units (quantity sold); a date may also be a business
measure in some cases. It also depends on the granularity at which the data is stored.
What is the main difference between a schema in an RDBMS and schemas in a data warehouse?
RDBMS schema
* Used for OLTP systems
* Traditional and old schema
* Normalized
* Difficult to understand and navigate
* Cannot easily handle extraction and complex analytical queries
* Poorly modelled for analysis
DWH schema
* Used for OLAP systems
* New-generation schema
* De-normalized
* Easy to understand and navigate
* Extraction and complex analytical queries can be solved easily
* Very good model for analysis
What is the need for a surrogate key; why is the primary key not used as the surrogate key?
A surrogate key is an artificial identifier for an entity. Surrogate key values are generated by
the system sequentially (like the Identity property in SQL Server and a Sequence in Oracle);
they do not describe anything.
A primary key is a natural identifier for an entity. Primary key values are entered by the
users and uniquely identify each row; there is no repetition of data.
Need for a surrogate key rather than the primary key:
If a column is made the primary key and later the data type or the length of that column has
to change, then all the foreign keys that depend on that primary key have to be changed as
well, making the database unstable.
Surrogate keys make the database more stable because they insulate the primary and
foreign key relationships from changes in data types and lengths.
What are warehouse schemas such as the star schema and the snowflake schema, and what
are their advantages / disadvantages under different conditions?
How do you design an optimized data warehouse, from both the data-load and the query
performance points of view?
What exactly are parallel processing and partitioning, and how can they be employed to
optimize the data warehouse design?
What are the preferred indexes and constraints for a DWH?
How do the volume of data (from medium to very high) and the frequency of querying affect
the design considerations?
Why a data warehouse?
Difference between OLTP & OLAP?
What are the features of a DWH?
Do you know some more ETL tools?
What is the use of the staging area?
Do you know the life cycle of a WH?
Did you hear about the star schema?
Tell me about yourself.
How many dimensions & facts are there in your project?
What is a dimension?
Difference between DWH & data mart?
1. How can you explain a DWH to a layman?
2. What are MOLAP and ROLAP? What is the difference between them?
3. What are the different schemas used in a DWH? Which one is
most commonly used?
Oracle:
How many types of indexes are there?
Which indexes are used in a warehouse?
What is the difference between TRUNCATE and DELETE on a table?
How do you optimise a query? Read up on optimisation in
Oracle.
Project:
Project description and all...
1] What is the difference between snowflake and star schemas?
2] How will you come to know that you have to do performance tuning?
3] Describe your project.
4] How many dimensions and facts are in your project?
5] Draw SCD Type 1 and SCD Type 2.
6] If from the target you are getting timestamp data and you have one port in the target
having the data type date, then how will you load it?
7] What are the different types of lookups?
8] What condition will you give in the Update Strategy transformation in SCD Type 1?
9] What are the different types of variables in the Update transformation?
10] What are target-based commit and source-based commit?
11] Why do you think SCD Type 2 is critical?
12] What are the types of facts?
13] What is a factless fact?
14] If I am returning one port through a connected lookup, then why do you need an
unconnected lookup?
15] If some duplicate rows are coming from a flat file, then how will you remove them
using Informatica?
16] If duplicate rows are coming from a relational table, then how will you remove
them using Informatica?
17] If I did not give the group-by option in the Aggregator transformation, then what will
be the result?
18] What is multidimensional analysis?
19] If I give all the characteristics of a data warehouse to OLTP, will it then be a data
warehouse?
20] What are the characteristics of a data warehouse?
21] What is the break-up of your team?
22] How will you do performance tuning in a mapping?
23] Which is better for performance, static or dynamic cache?
24] What is target load order?
25] What are the transformations you worked on?
26] What is the naming convention you are using?
27] How are you getting data from the client?
28] How will you convert rows into columns and columns into rows using Informatica?
29] How will you enable test load?
30] Did you work with connected and unconnected lookups? Tell the difference.
31] Did you ever use the Normalizer?