
Selectorweb.com home > Ab Initio


Intro ------------------------------


The Latin term ab initio means "from the beginning". "Ab Initio Software LLC" is a company which excels at solving extreme data processing problems. Many IT people have never heard of Ab Initio. Why? First, Ab Initio never advertises: it gets so much business by referral that it doesn't need any advertising. Second, Ab Initio works only with the few clients who have extreme data processing problems. Ab Initio is not common, and it doesn't sell software. It sells solutions - and licenses the tools to provide those solutions. So it is more a solutions company than a software company. Most people who have heard of Ab Initio think of it as an ETL provider. This is wrong. Yes, Ab Initio has excellent tools for ETL (Extract, Transform, Load). But for some problems it provides solutions which have nothing to do with databases. In fact, in many situations it recommends to STOP using a database at all for performance reasons. If you are a small or medium client, Ab Initio is overkill. But if you have thousands of transactions per second, big databases, a very active web site, or a huge transactional or accounting system, Ab Initio is a savior. Its pricing model is a bit unusual, but the long-term costs are reasonable. You can read a short description on Wikipedia, but as of today (2009) this description doesn't give a good honest representation of the company (in my opinion).

- http://en.wikipedia.org/wiki/Ab_Initio
- http://www.abinitio.com
- http://www.patents.com/Ab-Initio-SoftwareCorporation/Lexington/MA/301339/company/
- http://www.bi-nerd.com/ab-initio-the-dark-horse-of-etl/
- Patents: US6654907.pdf, US7047232.pdf, US7164422.pdf, US7167850.pdf
- http://www.linkedin.com/companies/ab-initio

Ab Initio is a private company. Its main offices are in Lexington, Massachusetts (near Boston, USA - since 1994), but it has offices all over the world (as you can see on its web site). They have very talented, devoted people. I've heard that when you call their customer service, there is a 75% chance you will speak with a Ph.D. It may very well be true. The company was formed by former employees of Thinking Machines Corporation. Some key people: Craig W. Stanfill, Richard A. Shapiro, Stephen A. Kukolich.

Ab Initio uses its own people as well as independent consulting firms to build a proof of concept for a client, and then to guide the client in using its tools. Unfortunately Ab Initio provides very little information about its solutions to the general public. Without getting into details, most AI functionality can be scripted using several commands which you can run from the prompt (with many options):

- m_* commands (for example, m_shutdown, m_mkfs, m_cp, etc.) - used for administration
- mp ... (some options) - to define, establish, and run jobs
- air ... (some options) - to work with the EME (basically a specialized version control system)

The scripts can be easily integrated with external schedulers. Around 1997 Ab Initio introduced the Graphical Development Environment - a very powerful piece of desktop software. You place components on the screen, connect them, and define what they do and how. Your application is thus a graph. You can create components which consist of other components, which consist of other components, etc. - so effectively you can drill deeply into the diagram. I've seen this tool generate a powerful data processing application in less than 10 minutes. You can run the application right from the IDE, or save it as a set of scripts (ksh for unix). The scripts call misc. component libraries. The libraries are written in C++. Some of the key elements of the system:

- "Co>Operating System"
- "Component Library"
- "Graphical Development Environment" (GDE)
- "Enterprise Meta>Environment" (EME)
- "Data Profiler"
- "Conduct>It"

The main power of Ab Initio - parallelism - is achieved via its "Co>Operating System", which provides the facilities for parallel execution (multiple CPUs and/or multiple boxes), platform-independent data transport, checkpointing, and process monitoring. A lot of attention is devoted to monitoring resources (CPU, memory), and to multi-file / multi-directory handling. The Component Library is a set of software modules to perform sorting, data transforming, and high-speed data loading and unloading tasks. Ab Initio tools incorporate best practices, such as checkpointing, rerunnability, tagging everything with unique ids, etc. Unfortunately Ab Initio doesn't advertise or publish much information, so there are just bits and pieces here and there. Here is an interesting site:

- http://www.geekinterview.com/Interview-Questions/Data-Warehouse/Abinitio

Question / Answer
==========================================================

Phases vs Checkpoints
Phases are used to break the graph into pieces. Temporary files created during a phase are deleted after its completion. Phases let you separately manage the resource-consuming (memory, CPU, disk) parts of the application.
Checkpoints are created for recovery purposes: points where everything is written to disk. You can recover to the latest saved checkpoint and rerun from it. You can have phase breaks with or without checkpoints.

xfr
A new sandbox will have many directories: mp, dml, xfr, db, ... . xfr is the directory where you put files with extension .xfr containing your own custom functions (which you then use via: include "somepath/xfr/yourfile.xfr"). Usually XFR stores mappings.

Three types of parallelism
1) Data parallelism - partitioning of data into parallel streams for parallel processing.
2) Component parallelism - components execute simultaneously on different branches of the graph.
3) Pipeline parallelism - sequential stages process different records at the same time.

MFS (Multi-File System)
- m_mkfs - create a multifile (m_mkfs ctrlfile mpfile1 ... mpfileN)
- m_ls - list all the multifiles
- m_rm - remove a multifile
- m_cp - copy a multifile
- m_mkdir - add more directories to an existing directory structure

Memory requirements of a graph
- Each partition of a component uses ~8 MB + max-core (if any).
- Add the size of lookup files used in the phase (if multiple components use the same lookup, count it only once).
- Multiply by the degree of parallelism.
- Add up all components in a phase - that is how much memory is used in that phase.
- Select the largest-memory phase in the graph.
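The memory arithmetic above can be sketched in Python. This is only an illustration of the rule of thumb: the component names, sizes, and the ~8 MB per-partition overhead are taken from the checklist above, and the example numbers are made up.

```python
# Rule-of-thumb memory estimate for one phase of a graph, following the
# checklist above: (8 MB + max-core) per partition plus distinct lookup
# files, all multiplied by the degree of parallelism.
MB = 1024 * 1024

def phase_memory(max_cores, lookup_sizes, parallelism):
    """max_cores: list of max-core values in bytes, one per component (0 if none).
    lookup_sizes: sizes of the distinct lookup files used in the phase.
    parallelism: degree of data parallelism (number of partitions)."""
    per_partition = sum(8 * MB + mc for mc in max_cores)
    per_partition += sum(lookup_sizes)  # each distinct lookup counted once
    return per_partition * parallelism

# Hypothetical phase: a Sort with 100 MB max-core, a Reformat with none,
# one 50 MB lookup file, running 4-way parallel.
print(phase_memory([100 * MB, 0], [50 * MB], 4) // MB)  # 664
```

Repeat this per phase and take the largest result - that is the graph's peak estimate.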

How to calculate a SUM
Use SCAN, ROLLUP, or a Scan followed by a Dedup Sort selecting the last record.

Dedup sort with null key
If we don't use any key in the sort component while using dedup sort, the output depends on the "keep" parameter:
- first - only the first record
- last - only the last record
- unique_only - there will be no records in the output file

Join on partitioned flow
file1 (A,B,C), file2 (A,B,D). We partition both files by "A", and then join by "A,B". Is it OK? Or should we partition by "A,B"? Not clear.

Checkin, checkout
You can do checkin/checkout using the wizard right from the GDE, using versions and tags.

How to have different passwords for QA and production
Parameterize the .dbc file - or use an environment variable.

How to get records 50-75 out of 100
- use scan and filter
- m_dump <dml> <mfs file> -start 50 -end 75
- use the next_in_sequence() function and a filter-by-expression component (next_in_sequence() > 50 && next_in_sequence() < 75)

How to convert a serial file into MFS
Create an MFS, then use a partition component.
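The "keep" semantics above can be sketched in plain Python (illustrative only, not Ab Initio code). With a null key the whole file is one group, which is why unique_only yields no records:

```python
# Sketch of Dedup Sorted "keep" semantics. With no key, all records fall
# into a single group, so the whole file collapses per the keep choice.
from itertools import groupby

def dedup(records, key=None, keep="first"):
    """key: function extracting the dedup key (None = null key: one group).
    keep: 'first', 'last', or 'unique_only'."""
    out = []
    for _, grp in groupby(records, key=key or (lambda r: None)):
        grp = list(grp)
        if keep == "first":
            out.append(grp[0])
        elif keep == "last":
            out.append(grp[-1])
        elif keep == "unique_only" and len(grp) == 1:
            out.append(grp[0])
    return out

data = ["a", "b", "c"]
print(dedup(data, keep="first"))        # ['a']
print(dedup(data, keep="last"))         # ['c']
print(dedup(data, keep="unique_only"))  # [] - no records at all
```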

Project parameters vs sandbox parameters
When you check out a project into your sandbox, you get project parameters. Once in your sandbox, you can refer to them as sandbox parameters.

Bad Straight-flow
The error you get when connecting mismatching components (for example, connecting a serial flow directly to an mfs flow without using a partition component).

Merging graphs
You cannot merge two Ab Initio graphs. You can use the output of one graph as input for another. You can also copy/paste contents between graphs. See also about using .plan.

Partitioning, repartitioning, departitioning
- partitioning - dividing a single flow of records (serial file, mfs) into multiple flows
- departitioning - removing partitioning (gather or merge component)
- re-partitioning - changing the number of partitions (e.g., from 2 to 4 flows)

Lookup file
For large amounts of data use an MFS lookup file (instead of a serial one).

Indexing
No indexes as such. But there is "output indexing" using reformat and doing the necessary coding in the transform part.
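The three partitioning terms above can be sketched in plain Python (a concept illustration, not Ab Initio code), using round-robin for balance:

```python
# Plain-Python sketch of partitioning, departitioning and repartitioning
# a flow of records.

def partition_round_robin(records, n):
    """Partitioning: divide a single (serial) flow into n parallel flows."""
    flows = [[] for _ in range(n)]
    for i, rec in enumerate(records):
        flows[i % n].append(rec)
    return flows

def departition_gather(flows):
    """Departitioning (gather): combine parallel flows back into one.
    Like the gather component, record order is not preserved."""
    return [rec for flow in flows for rec in flow]

def repartition(flows, n):
    """Repartitioning: change the number of partitions, e.g. from 2 to 4."""
    return partition_round_robin(departition_gather(flows), n)

two = partition_round_robin(list(range(8)), 2)
print(two)              # [[0, 2, 4, 6], [1, 3, 5, 7]]
print(len(repartition(two, 4)))  # 4
```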

Environment project
A special public project that exists in every Ab Initio environment. It contains all the environment parameters required by the private or public projects which constitute the AI Standard Environment.

Aggregate vs Rollup
- Aggregate - old component.
- Rollup - newer, extended, recommended to use instead of Aggregate (built-in functions like sum, count, avg, min, max, product, ...).

EME, GDE, Co>Operating System
- EME = Enterprise Meta>Environment. Functions: repository, version control, statistical analysis, dependency analysis. It is on the server side and holds all the projects (metadata of transformations, config info, source and target info: graph, dml, xfr, ksh, sql, etc.). This is where you check in/check out. The /Projects dir of the EME contains common directories for all application sandboxes connected to it. It also helps in dependency analysis of code. Ab Initio has a series of air commands to manipulate repository objects.
- GDE = Graphical Development Environment (on the client box).
- Co>Operating System = the Ab Initio server installed on top of the native (unix) OS on the server.

Fencing
Fencing means job control on a priority basis. In AI it actually refers to customized phase breaking: a well-fenced graph means that no matter what the source data volume is, the process will not choke on deadlocks. It actually limits the number of simultaneous processes.
- Fencing - changing the priority of a job.
- Phasing - managing resources to avoid deadlocks, for example by limiting the number of simultaneous processes (breaking the graph into phases, only one of which can run at any given time).

Deadlock
Deadlock occurs when two or more processes request the same resource. To avoid it, use phasing and resource pooling.

Environment
- AB_HOME - where the Co>Operating system is installed
- AB_AIR_ROOT - default location for the EME datastore
- sandboxes
- standard environment: AI_SORT_MAX_CORE, AI_HOME, AI_SERIAL, AI_MFS, etc.
- from the unix prompt: env | grep AI

Continuous components
Continuous components produce usable output files while running continuously. Examples: Continuous rollup, Continuous update, batch subscribe.

Wrapper / unix script to run graphs
See "scheduler" below - dependencies can be handled by an external scheduler or by a wrapper ksh script running the graph scripts.

Multistage component
A multistage component transforms input records in 5 stages (1. input select, 2. temporary initialization, 3. processing, 4. output selection, 5. finalize). So it is a transform component which has packages. Examples: Scan, Normalize and Denormalize, Rollup, Denormalize Sorted.

Dynamic DML
Dynamic DML is used if the input metadata can change. Example: at different times, different input files with different dml are received for processing. In that case we can use a flag in the dml; the flag is read first in the received input file, and according to the flag the corresponding dml is used.

Fan in, fan out
- fan out - partition component (increases parallelism)
- fan in - departition component (decreases parallelism)

Lock
A user can lock a graph for editing so that others will see a message and cannot edit the same graph.

Join vs lookup
Lookup is good for speed with small files (it loads the whole file in memory). For large files use join. You may need to increase the max-core limit to handle big joins.

Multi update
multi update executes SQL statements - it treats each input record as a completely separate piece of work.

Scheduler
We can use Autosys, Control-M, or any other external scheduler. We can take care of dependencies in many ways. For example, if scripts should run sequentially, we can arrange for this in Autosys, or we can create a wrapper script with several sequential commands (nohup command1.ksh &; nohup command2.ksh &; etc.). We can even create a special graph in Ab Initio to execute individual scripts as needed.

API and Utility
These are database interface modes for the input table component (api - uses SQL; utility - bulk loads, or whatever modes the vendor provides).

Lookup file
Lookup file component. Functions: lookup, lookup_count, lookup_next, lookup_match, lookup_local. Lookups are always used in combination with reformat components.

Calling a stored proc in DB
You can call a stored proc (for example, from an input component). In fact, you can even write an SP in Ab Initio. Make it "with recompile" to assure good performance.

Frequently used functions
string_ltrim, string_lrtrim, string_substring, reinterpret_as, today(), now()

Data validation
is_valid, is_null, is_blank, is_defined
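For readers unfamiliar with DML, here are rough Python analogues of a few of the functions named above. These are simplified sketches; real Ab Initio DML semantics differ in details (NULL handling, charsets), so treat them as behavior hints only.

```python
# Rough Python analogues of some DML string/validation functions
# (simplified; not the authoritative Ab Initio semantics).

def string_ltrim(s):
    return s.lstrip()            # trim leading blanks

def string_lrtrim(s):
    return s.strip()             # trim leading and trailing blanks

def string_substring(s, start, length):
    return s[start - 1:start - 1 + length]  # DML substrings are 1-based

def is_null(v):
    return v is None

def is_blank(s):
    return s.strip() == ""       # only blanks (or empty)

print(string_lrtrim("  abc  "))          # 'abc'
print(string_substring("abcdef", 2, 3))  # 'bcd'
print(is_blank("   "))                   # True
```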

Driving port
When joining inputs (in0, in1, ...), one of the ports is used as "driving" (by default - in0). The driving input is usually the largest one, whereas the smallest one can have the "Sorted-Input" parameter set to "Input need not be sorted", because it will be loaded completely in memory.

Ab Initio vs Informatica for ETL
Ab Initio benefits: parallelism built in, multifile system, handles huge amounts of data, easy to build and run. It generates scripts which can be easily modified as needed (if something couldn't be done in the ETL tool itself). The scripts can be easily scheduled using any external scheduler and easily integrated with other systems. Ab Initio doesn't require a dedicated administrator. Ab Initio doesn't have built-in CDC capabilities (CDC = Change Data Capture). Ab Initio allows you to attach error/reject files to each transformation and to capture and analyze the message and data separately (as opposed to Informatica, which has just one huge log). Ab Initio provides immediate metrics for each component.

Override key
The override key option is used when we need to join 2 fields which have different field names.

Control file
The control file should be in the multifile directory (it contains the addresses of the serial files).

max-core
The max-core parameter (for example, sort 100 MBytes) specifies the amount of memory used by a component (like Sort or Rollup) - per partition - before spilling to disk. Usually you don't need to change it - just use the default value. Setting it too high may degrade performance because of OS swapping and degradation of the performance of other components.

Input parameters
graph > parameters tab > click "create" - and create a parameter. Usage: $paramname. Edit > Parameters. These parameters will be substituted at run time. You may need to declare your parameter scope as formal.

Error trapping
Each component has reject, error, and log ports. Reject captures rejected records, error captures the corresponding errors, and log captures the execution statistics of the component. You can control the reject status of each component by setting the reject threshold to Never Abort, Abort on first reject, or setting ramp/limit. You can also use the force_error() function in the transform function.
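The ramp/limit threshold mentioned above is commonly described as tolerating up to (limit + ramp * records processed) rejects before aborting. The sketch below illustrates that formula; treat the exact semantics as an assumption to verify against your Co>Operating system documentation.

```python
# Illustrative sketch of a ramp/limit reject threshold: the component
# tolerates up to (limit + ramp * records_processed) rejected records
# and aborts once the reject count exceeds that tolerance.
# The formula is an assumption based on common descriptions, not a
# quote from the Ab Initio docs.

def should_abort(rejects, records_processed, limit, ramp):
    tolerance = limit + ramp * records_processed
    return rejects > tolerance

# With limit=10 and ramp=0.01, after 1000 records we tolerate 20 rejects.
print(should_abort(15, 1000, 10, 0.01))  # False
print(should_abort(25, 1000, 10, 0.01))  # True
```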

How to see resource usage
In the GDE go to View > Tracking Details - you will see each component's CPU and memory usage, etc.

Assign keys component
Easy and saves development time. You need to understand how to feed parameters, and you can't control it easily.

Join in DB vs join in Ab Initio
- Scenario 1 (preferred): we run a query which joins 2 tables in the DB and gives us the result in just 1 DB component.
- Scenario 2 (much slower): we use 2 database components, extract all the data - and join it in Ab Initio.

Join with DB
Not recommended if the number of records is big. It is better to retrieve the data out - and then join in Ab Initio.

Data skew
A parameter showing how unevenly data is distributed between partitions:
skew = (partition size - avg. partition size) * 100 / (size of the largest partition)

dbc vs cfg
- .dbc - database configuration file (dbname, nodes, version, user/pwd); resides in the db directory.
- .cfg - any type of config file. For example, a remote connection config (name of the remote server, user/pwd to connect to the db, location of the OS on the remote machine, connection method). The .cfg file resides in the config dir.

Compilation errors ("depth not equal", "data format error", etc.)
depth error: we get this error when two components are connected together but their layouts don't match.

Types of partitions
broadcast, partition by expression, partition by round-robin, partition by key, partition with load balance

Unused port
When joining, used records go to the output port, unused records - to the unused port.

Tuning performance
- Go parallel using partitioning. Round-robin partitioning gives good balance. Use the Multi-File System (MFS). Use ad hoc MFS to read many serial files in parallel, and use the concat component.
- Once data is partitioned, do not switch it to serial and back. Repartition instead.
- Do not access large files via NFS - use FTP instead.
- Use lookup local rather than lookup (especially for big lookups).
- Use Rollup and Filter as soon as possible to reduce the number of records. Ideally do it in the source (database?) before you get the data.
- Remove unnecessary components. For example, instead of using filter by expression, you can implement the same function in Reformat/Join/Rollup. Another example: when joining data from 2 files, use the union function instead of adding an additional component for removing duplicates.
- Use gather instead of concatenate.
- It is faster to do a sort after a partition than to do a sort before a partition.
- Try to avoid using a join with the "db" component.
- When getting data from a database, make sure your queries are fast (use indexes, etc.). If possible, do the necessary selection / aggregation / sorting in the database before getting the data into Ab Initio.
- Tune max-core for optimal performance (for sort it depends on the size of the input file). Note: if an in-memory join cannot fit its non-driving inputs in the provided max-core, it will drop all the inputs to disk, and in-memory doesn't make sense.
- Using phase breaks lets you allocate more memory to individual components - thus improving performance.
- Use a checkpoint after sort to land data on disk.
- Use the Join and Rollup in-memory feature.
- When joining a very small dataset to a very large dataset, it is more efficient to broadcast the small dataset to MFS using the broadcast component, or to use the small file as a lookup. But for a large dataset don't use broadcast as a partitioner.
- Use Ab Initio layout instead of the database default to achieve parallel loads.
- Change the AB_REPORT parameter to increase the monitoring duration.
- Use catalogs for reusability.
- Components like Join/Rollup should have the option "Input must be sorted" if they are placed after a sort component.
- Minimize the number of sort components. Minimize usage of the sorted join component; if possible, replace it with an in-memory join/hash join.
- Use only the required fields in sort, reformat, and join components.
- Use "Sort within Groups" instead of plain Sort when the data is already presorted.
- Use phasing/flow buffers in case of merge-sorted joins.
- Minimize the use of regular expression functions like re_index in transform functions.
- Avoid repartitioning data unnecessarily.
- When splitting records into more than two flows, use the Reformat rather than the Broadcast component.
- For joining records from 2 flows, use the Concatenate component ONLY when there is a need to follow some specific order in joining records. If no order is required, it is preferable to use the Gather component.
- Instead of putting many Reformat components consecutively, use the output indexes parameter in the first Reformat component and specify the condition there.

Delta table / master table
- Delta table - maintains the sequencer of each data table.
- Master (or base) table - a table on top of which we create a view.
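The skew formula above can be applied per partition as a quick balance check. This sketch follows the formula exactly as stated (definitions of skew vary between sources, so treat it as this page's convention):

```python
# Per-partition skew, per the formula above:
# skew = (partition size - avg partition size) * 100 / (largest partition size)

def skew(sizes):
    avg = sum(sizes) / len(sizes)
    largest = max(sizes)
    return [(s - avg) * 100 / largest for s in sizes]

# A badly balanced 4-way partitioning: one partition holds half the rows.
print(skew([500, 100, 200, 200]))  # [50.0, -30.0, -10.0, -10.0]
print(skew([250, 250, 250, 250]))  # perfectly balanced: all zeros
```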

Scan vs Rollup
- rollup - performs aggregate calculations on groups
- scan - calculates cumulative (running) totals
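The scan/rollup distinction above can be shown on the same grouped input (a plain-Python sketch with made-up account data, not Ab Initio code):

```python
# Rollup vs Scan on input already sorted by the group key:
# rollup emits one aggregate record per group,
# scan emits a running (cumulative) total for every input record.
from itertools import groupby

txns = [("acct1", 10), ("acct1", 20), ("acct2", 5), ("acct2", 7)]

def rollup_sum(records):
    return [(k, sum(v for _, v in grp))
            for k, grp in groupby(records, key=lambda r: r[0])]

def scan_sum(records):
    out = []
    for k, grp in groupby(records, key=lambda r: r[0]):
        total = 0
        for _, v in grp:
            total += v
            out.append((k, total))  # cumulative total so far
    return out

print(rollup_sum(txns))  # [('acct1', 30), ('acct2', 12)]
print(scan_sum(txns))    # [('acct1', 10), ('acct1', 30), ('acct2', 5), ('acct2', 12)]
```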

Packages
Used in multistage components or transform components.

Reformat vs "Redefine Format"
- Reformat - deriving new data by adding/dropping fields
- Redefine Format - renaming fields

Conditional DML
DML which is selected based on a condition.

SORTWITHINGROUP
The prerequisite for using sortwithingroup is that the data is already sorted by the major key. sortwithingroup outputs the data once it has finished reading a major-key group. It is like an implicit phase.

Passing a condition as a parameter
Define a formal keyword parameter of type string - say, FilterCondition - and suppose you want it to filter on COUNT > 0. In your graph, in your "Filter by Expression" component, enter the following condition: $FilterCondition. Now on your command line or in a wrapper script give the following command:

    YourGraphname.ksh -FilterCondition 'COUNT > 0'

Passing a file name as a parameter

    #!/bin/ksh
    # Run the environment setup script
    typeset PROJ_DIR=$(cd $(dirname $0)/..; pwd)
    . $PROJ_DIR/ab_project_setup.ksh $PROJ_DIR
    # Pass the script parameters to the graphs
    if [ $# -eq 2 ]; then
        INPUT_FILE_PARAMETER_1=$1
        INPUT_FILE_PARAMETER_2=$2
        cd $AI_RUN
        # This graph uses the first input file
        ./my_graph1.ksh $INPUT_FILE_PARAMETER_1
        # This graph uses the second input file
        ./my_graph2.ksh $INPUT_FILE_PARAMETER_2
        exit 0
    else
        echo "Insufficient parameters"
        exit 1
    fi
    -------------------------------------
    #!/bin/ksh
    # Run the environment setup script
    typeset PROJ_DIR=$(cd $(dirname $0)/..; pwd)
    . $PROJ_DIR/ab_project_setup.ksh $PROJ_DIR
    # Export the script parameter as INPUT_FILE_NAME
    export INPUT_FILE_NAME=$1
    cd $AI_RUN
    # This graph uses the input file
    ./my_graph1.ksh
    # This graph also uses the input file
    ./my_graph2.ksh
    exit 0

How to remove header and trailer lines?
Use conditional dml to separate detail from header and trailer. For validations use reformat with count 3 (out0: header, out1: detail, out2: trailer).

How to create a multi file system on Windows
- first method: in the GDE go to Run > Execute Command - and run m_mkfs c:\control c:\dp1 c:\dp2 c:\dp3 c:\dp4
- second method: double-click the file component, and in the ports tab double-click partitions - there you can enter the number of partitions.

Vector
A vector is simply an array: an ordered set of elements of the same type (the type can be any type, including a vector or a record).

Dependency analysis
Dependency analysis answers questions regarding data lineage: where does the data come from, what applications produce and depend on this data, etc.

Surrogate key
There are many ways to create a surrogate key. For example, you can use the next_in_sequence() function in your transform. Or you can use the "Assign key values" component. Or you can write a stored procedure - and call it. Note: if you use partitions, then do something like this:
(next_in_sequence() - 1) * no_of_partition() + this_partition()

.abinitiorc
This is a config file for Ab Initio - in the user's home directory and in $AB_HOME/Config. It sets the abinitio home path, configuration variables (AB_WORK_DIR, AB_DATA_DIR, etc.), login info (id, encrypted password), login methods for execution hosts (like the EME host), etc.

.profile
Your ksh init file (environment, aliases, path variables, history file settings, command prompt settings, etc.).

Data mapping, data modelling

How to execute the graph
From the GDE - the whole graph or by phases. From a checkpoint. Also using ksh scripts.

Write Multiple Files
A component which allows writing simultaneously into multiple local files.

Testing
Run the graph - see the results. Use components from the Validate category.

Sandbox vs EME
The sandbox is your private area where you develop and test. Only one project and one version can be in the sandbox at any time. The EME datastore contains all versions of the code that have been checked into it (source control).

Layout
Where the data files are and where the components are running. For example, for data - serial or partitioned (multi-file). The layout is defined by the location of the file (or the control file for a multifile). In the graph the layout can propagate automatically (for a multifile you have to provide details).

Latest versions
April 2009: GDE ver. 1.15.6, Co>Operating system ver. 2.14.

Graph parameters
Menu Edit > Parameters - allows you to specify private parameters for the graph. They can be of 2 types - local and formal.

Plan>It
You can define pre- and post-processes and triggers. Also you can specify methods to run on success or on failure of the graphs.

Frequently used components
- input file / output file
- input table / output table
- lookup / lookup_local
- reformat
- gather / concatenate
- join
- run sql
- join with db
- compression components
- filter by expression
- sort (single or multiple keys)
- rollup
- trash
- partition by expression / partition by key
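The partition-aware surrogate key formula given above ((next_in_sequence() - 1) * no_of_partition() + this_partition()) can be sketched in Python. This assumes a 0-based partition number and a per-partition counter starting at 1, which is how the formula avoids collisions between partitions:

```python
# Sketch of the partition-aware surrogate key formula above.
# Each partition keeps its own 1-based counter (next_in_sequence);
# the formula interleaves the partitions' key ranges so they never collide.

def make_key_generator(this_partition, n_partitions):
    seq = 0
    def next_key():
        nonlocal seq
        seq += 1  # next_in_sequence() starts at 1
        return (seq - 1) * n_partitions + this_partition
    return next_key

# Two partitions generating keys independently - no collisions:
p0 = make_key_generator(0, 2)
p1 = make_key_generator(1, 2)
print([p0() for _ in range(3)])  # [0, 2, 4]
print([p1() for _ in range(3)])  # [1, 3, 5]
```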

Running on hosts
The Co>Operating system is layered on top of the native OS (unix). When running from the GDE, the GDE generates a script (according to the "run" settings). The Co>Op system executes the scripts on different machines (using the specified host settings and connection methods, like rexec, telnet, rsh, rlogin) - and then returns error or success codes back.

Conventional loading vs direct loading
This is basically an Oracle question - regarding the SQL*Loader (SQLLDR) utility.
- Conventional load - uses insert statements. All triggers fire, all constraints are checked, all indexes are updated.
- Direct load - data is written directly, block by block. It can load into a specific partition. Some constraints are checked; indexes may be disabled - you need to specify native options to skip index maintenance.

Semi-join
The Ab Initio online help gives 3 examples of joins: inner join, outer join, and semi join.
- for an inner join, the 'record_requiredN' parameter is true for all "in" ports
- for an outer join, it is false for all "in" ports
- for a semi join, it is true for both ports (like an inner join), but the dedup option is set only on one side
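The record_requiredN idea above can be illustrated with a toy keyed join (plain-Python sketch; the semi-join's extra dedup step is omitted here, this only shows how the required flags select inner vs outer behavior):

```python
# Toy keyed join: record_required mimics the parameter described above -
# must the key be present on that input for a record to be emitted?

def join(left, right, left_required, right_required):
    """left/right: dicts mapping join key -> value."""
    out = []
    for k in sorted(set(left) | set(right)):
        in_l, in_r = k in left, k in right
        if (in_l or not left_required) and (in_r or not right_required):
            out.append((k, left.get(k), right.get(k)))
    return out

L = {1: "a", 2: "b"}
R = {2: "x", 3: "y"}
print(join(L, R, True, True))    # inner join: [(2, 'b', 'x')]
print(join(L, R, False, False))  # full outer join: all three keys
print(join(L, R, True, False))   # left outer: keys 1 and 2
```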

http://www.geekinterview.com/Interview-Questions/Data-Warehouse/Abinitio/page10

Some alternative vendors:

- etl.html , http://en.wikipedia.org/wiki/Etl#Tools
- http://en.wikipedia.org/wiki/IBM_InfoSphere_DataStage
- http://en.wikipedia.org/wiki/Expressor, http://www.expressor-software.com
- http://en.wikipedia.org/wiki/Informatica

Disclaimer

This page contains only data publicly available on the web. It doesn't contain any secret or proprietary information.

