
Data Federator User Guide

BusinessObjects Data Federator XI 3.0


Copyright © 2008 Business Objects, an SAP company. All rights reserved. Business Objects
owns the following U.S. patents, which may cover products that are offered and
licensed by Business Objects: 5,295,243; 5,339,390; 5,555,403; 5,590,250;
5,619,632; 5,632,009; 5,857,205; 5,880,742; 5,883,635; 6,085,202; 6,108,698;
6,247,008; 6,289,352; 6,300,957; 6,377,259; 6,490,593; 6,578,027; 6,581,068;
6,628,312; 6,654,761; 6,768,986; 6,772,409; 6,831,668; 6,882,998; 6,892,189;
6,901,555; 7,089,238; 7,107,266; 7,139,766; 7,178,099; 7,181,435; 7,181,440;
7,194,465; 7,222,130; 7,299,419; 7,320,122 and 7,356,779. Business Objects and
its logos, BusinessObjects, Business Objects Crystal Vision, Business Process
On Demand, BusinessQuery, Cartesis, Crystal Analysis, Crystal Applications,
Crystal Decisions, Crystal Enterprise, Crystal Insider, Crystal Reports, Crystal
Vision, Desktop Intelligence, Inxight and its logos, LinguistX, Star Tree, Table
Lens, ThingFinder, Timewall, Let There Be Light, Metify, NSite, Rapid Marts,
RapidMarts, the Spectrum Design, Web Intelligence, Workmail and Xcelsius are
trademarks or registered trademarks in the United States and/or other countries
of Business Objects and/or affiliated companies. SAP is the trademark or registered
trademark of SAP AG in Germany and in several other countries. All other names
mentioned herein may be trademarks of their respective owners.

Third-party Contributors: Business Objects products in this release may contain redistributions
of software licensed from third-party contributors. Some of these individual components may
also be available under alternative licenses. A partial listing of third-party
contributors that have requested or permitted acknowledgments, as well as required
notices, can be found at: http://www.businessobjects.com/thirdparty

2008-10-09

Contents

Chapter 1 Introduction to Data Federator 25
The Data Federator application.................................................................26
An answer to a common business problem.........................................26
Fundamental notions in Data Federator....................................................29
Data Federator Designer: design time.................................................30
Data Federator Query Server: run time................................................30
Important terms....................................................................................30
Data Federator user interface....................................................................31
Overview of the methodology....................................................................33
Adding the targets................................................................................35
Adding the datasources........................................................................35
Mapping datasources to targets...........................................................36
Checking if data passes constraints.....................................................36
Deploying the project............................................................................37

Chapter 2 Starting a Data Federator project 39


Working with Data Federator.....................................................................40
Login and passwords for Data Federator..................................................40
Adding new users......................................................................................40
Starting a project........................................................................................41
Adding a project...................................................................................41
Opening a project.................................................................................41
Deleting Data Federator projects.........................................................42
Closing Data Federator projects...........................................................42
Unlocking projects................................................................................43

Chapter 3 Creating target tables 45


Managing target tables..............................................................................46
Adding a target table manually.............................................................46
Adding a target table from a DDL script...............................................47
Adding a target table from an existing table.........................................47
Changing the name of a target table....................................................48
Displaying the impact and lineage of target tables...............................48
Details on configuring target table schemas........................................49
Determining the status of a target table.....................................................50
How to read the Impact and lineage pane in Data Federator Designer.....52
Testing targets...........................................................................................53
Testing a target.....................................................................................53
Managing domain tables............................................................................54
Adding a domain table to enumerate values in a target column..........55
Examples of domain tables..................................................................56
Adding a domain table by importing data from a file............................59
Dereferencing a domain table from your target table...........................60
Exporting a domain table as CSV........................................................61
Deleting a domain table........................................................................61
Using domain tables in your target tables.................................................61
Using a domain table as the domain of a column................................62

Chapter 4 Defining sources of data 65


About datasources.....................................................................................66
Datasource user interface....................................................................67
Draft and Final datasources.................................................................67
About configuration resources..............................................................70
Generic and pre-defined datasources..................................................71
Creating database datasources using resources......................................72
Adding Access datasources.................................................72
Adding DB2 datasources......................................................................76
Adding Informix datasources................................................................81
Adding MySQL datasources.................................................................86
Adding Oracle datasources..................................................................91
Adding Netezza datasources...............................................................96
Adding Progress datasources............................................................101
Adding SAS datasources...................................................................107
Adding SQL Server datasources........................................................113
Adding Sybase datasources...............................................................119
Adding Sybase IQ datasources..........................................................124
Adding Teradata datasources............................................................128
Creating JDBC datasources from custom resources.........................133
Creating generic database datasources..................................................138
Creating generic JDBC or ODBC datasources..................................138
Managing database datasources............................................................154
Using deployment contexts ...............................................................154
Defining deployment parameters for a project ..................................155
Defining a connection with deployment context parameters .............156
Adding tables to a relational database datasource............................157
Updating the tables of a relational database datasource...................158
Creating text file datasources..................................................................158
About text file formats.........................................................................159
Setting a text file datasource name and description..........................159
Selecting a text data file.....................................................................160
Configuring file extraction parameters...............................................161
Automatically extracting the schema of your datasource table..........168
Indicating a primary key in a text file datasource...............................170
Managing text file datasources................................................................170
Editing the schema of an existing table..............................................170
Using a schema file to define a text file datasource schema.............171
Generating a schema when a text file has no header row.................171
Defining the schema of a text file datasource manually.....................172
Selecting multiple text files as a datasource .....................................173
Numeric formats used in text files......................................................174
Date formats used in text files............................................................176
Modifying the data extraction parameters of a text file.......................177
Using a remote text file as a datasource............................................178
Creating XML and web service datasources...........................................179
About XML file datasources...............................................................179
Adding an XML file datasource..........................................................179
Choosing and configuring a source file of type XML..........................180
Adding datasource tables to an XML datasource..............................181
About web service datasources.........................................................182
Adding a web service datasource......................................................183
Extracting the available operations from a web service.....................183
Selecting the operations you want to access from a web service......184
Authenticating on a web service datasource......................................185
Authenticating on a server that hosts web services used as
datasources........................................................................................185
Using the SOAP header to pass parameters to web services...........186
Selecting which response elements to convert to tables in a web service
datasource..........................................................................................187
Assigning constant values to parameters of web service operations..188
Assigning dynamic values to parameters of web service operations..188
Propagating values to parameters of web service operations...........189
Managing XML and web service datasources.........................................189
Using the elements and attributes pane.............................................189
Selecting multiple XML files as datasources .....................................199
Using a remote XML file as a datasource..........................................200
Testing web service datasources.......................................................201
Creating remote Query Server datasources............................202
Configuring a remote Query Server datasource................................202
Managing datasources............................................................................204
Defining the schema of a datasource.................................................204
Authentication methods for database datasources............................207
Displaying the impact and lineage of datasource tables....................208
Restricting access to columns using input columns...........................209
Changing the source type of a datasource........................................209
Deleting a datasource........................................................................210
Testing and finalizing datasources...........................................................210
Running a query on a datasource......................................................211
Making your datasource final.............................................................212
Editing a final datasource...................................................................213

Chapter 5 Mapping datasources to targets 215


Mapping datasources to targets process overview.................................216
The user interface for mapping..........................................................216
Adding a mapping rule for a target table............................................217
Selecting a datasource table for the mapping rule.............................218
Writing mapping formulas...................................................................219
Determining the status of a mapping.......................................................220
Mapping values using formulas...............................................................222
Mapping formula syntax.....................................................................222
Filling in mapping formulas automatically..........................................223
Setting a constant in a column of a target table.................................224
Testing mapping formulas..................................................................226
Writing aggregate formulas................................................................227
Writing case statement formulas........................................................230
Testing case statement formulas........................................................232
Mapping values to input columns............................................................233
Assigning constant values to input columns using pre-filters.............233
Assigning dynamic values to input columns using table relationships.234
Propagating values to input columns using input value functions......234
Adding filters to mapping rules................................................................235
The precedence between filters and formulas...................................235
Adding a pre-filter on a column of a datasource table........................236
Editing a pre-filter...............................................................................239
Deleting a pre-filter.............................................................................241
Using lookup tables.................................................................................242
What is a lookup table?......................................................................242
The process of adding a lookup table between columns...................243
Adding a lookup table.........................................................................244
Referencing a datasource table in a lookup table..............................246
Referencing a domain table in a lookup table....................................247
Mapping values between a datasource table and a domain table.....248
Adding a lookup table by importing data from a file...........................249
Dereferencing a domain table from a lookup table............................251
Deleting a lookup table.......................................................................252
Exporting a lookup table as CSV.......................................................252
Using a target as a datasource................................................................252
Managing relationships between datasource tables................................253
The precedence between formulas and relationships........................253
Finding incomplete relationships........................................................254
Adding a relationship..........................................................................256
Editing a relationship..........................................................................259
Deleting a relationship........................................................................260
Choosing a core table........................................................................261
Configuring meanings of table relationships using core tables..........261
Using a domain table to constrain possible values............................263
The process of mapping multiple datasource tables to one target
table....................................................................................................264
Adding multiple datasource tables to a mapping...............................265
Writing mapping formulas when mapping multiple datasource tables.265
Adding a relationship when mapping multiple datasource tables......267
Interpreting the results of a mapping of multiple datasource tables....270
Combining mappings and case statements.......................................274
Managing a set of mapping rules............................................................276
Viewing all the mapping rules.............................................................276
Opening a mapping rule.....................................................................276
Copying a mapping rule.....................................................................277
Printing a mapping rule......................................................................278
Deleting a mapping rule.....................................................................279
Displaying the impact and lineage of mappings.................................279
Activating and deactivating mapping rules..............................................279
Deactivating a mapping rule...............................................................280
Activating a mapping rule...................................................................280
Testing mappings.....................................................................................280
Testing a mapping rule.......................................................................281
Managing datasource, lookup and domain tables in a mapping rule......282
Adding a table to a mapping rule.......................................................282
Replacing a table in a mapping rule...................................................284
Deleting a table from a mapping rule.................................................286
Viewing the columns of a table in a mapping rule..............................286
Setting the alias of a table in a mapping rule.....................................287
Restricting rows to distinct values......................................................289
Details on functions used in formulas......................................................292

Chapter 6 Managing constraints 293


Testing mapping rules against constraints...............................................294
Defining constraints on a target table......................................................294
Types of constraints...........................................................................294
Defining key constraints for a target table..........................................295
Defining not-null constraints for a target table....................................295
Defining custom constraints on a target table....................................296
Syntax of constraint formulas.............................................296
Configuring a constraint check...........................................................297
Checking constraints on a mapping rule.................................................299
The purpose of analyzing constraint violations..................................299
Computing constraint violations.........................................................300
Computing constraint violations for a group of mapping rules...........301
Filtering constraint violations..............................................................302
Marking a mapping rule as validated.................................................303
Viewing constraint violations..............................................................303
The Constraint checks pane...............................................................304
Reports....................................................................................................306

Chapter 7 Managing projects 307


Managing a project and its versions........................................................308
The user interface for projects............................................................308
The life cycle of a project....................................................................309
Editing the configuration of a project..................................................310
Storing the current version of a project..............................................311
Storing the current version of selected target tables..........................312
Downloading a version of a project....................................................313
Loading a version of a project stored on the server...........................314
Loading a version of a project stored on your file system..................315
Including a project in your current project..........................................315
Opening multiple projects...................................................................319
Exporting all projects..........................................................................320
Importing a set of projects..................................................................321
Deploying projects...................................................................................321
Servers on which projects are deployed............................................322
User rights on deployed catalogs and tables.....................................322
Storage of deployed projects..............................................................323
Version control of deployed projects..................................................323
Deploying a version of a project.........................................324
Using deployment contexts ...............................................................325
Reference of project deployment options...........................................327

Chapter 8 Managing changes 329


Overview..................................................................................................330
Verifying if changes are valid..............................................................330
Modifying the schema of a final datasource............................................331
Deleting an installed datasource.............................................................333
Modifying a target....................................................................................335
Adding a mapping....................................................................................337
Modifying a mapping................................................................................337
Adding a constraint check........................................................................338
Modifying a constraint check...................................................................338
Modifying a domain table.........................................................................338
Deleting a domain table...........................................................................339
Modifying a lookup table..........................................................................341
Deleting a lookup table............................................................................341

Chapter 9 Introduction to Data Federator Query Server 343


Data Federator Query Server overview...................................................344
Data Federator Query Server architecture..............................................344
How Data Federator Query Server accesses sources of data...........345
Key functions of Data Federator Administrator..................................347
Security recommendations......................................................................348

Chapter 10 Connecting to Data Federator Query Server using JDBC/ODBC drivers 349

Connecting to Data Federator Query Server using JDBC.......................350
Installing the JDBC driver with the Data Federator installer...............350
Installing the JDBC driver without the Data Federator installer..........351
Connecting to the server using JDBC................................................352
Example Java code for connecting to Data Federator Query Server using
JDBC..................................................................................................353
Connecting to Data Federator Query Server using ODBC......................354
Installing the ODBC driver for Data Federator (Windows only)..........354
Connecting to the server using ODBC...............................................355
Using ODBC when your application already uses another JVM........357
Accessing data........................................................................................357
JDBC URL syntax....................................................................................358
Parameters in the JDBC connection URL..........................................361
JDBC and ODBC Limitations...................................................................378
JDBC and ODBC Limitations.............................................................378
SQL Constraints......................................................................................380

Chapter 11 Using Data Federator Administrator 383


Data Federator Administrator overview...................................................384
Starting Data Federator Administrator.....................................................384
To end your Data Federator Administrator session.................................384
Server configuration.................................................................................384
Exploring the user interface ....................................................................385
Objects tab.........................................................................................385
My Query Tool tab..............................................................................386
Administration tab...............................................................................387
The Server Status menu item.............................................................389
The Connector Settings menu item....................................................390
The User Rights menu item................................................................391
The Configuration menu item.............................................................392
The Statistics menu item....................................................................393
Managing statistics with Data Federator Administrator...........................394
Using the Statistics tab to refresh statistics automatically..................395
Selecting the tables for which you want to display statistics..............395
Recording statistics that Query Server recently requested................395
List of options for the Global Refresh of Statistics pane....................396
Managing queries with Data Federator Administrator.............................397
Executing SQL queries using the My Query Tool tab.........................397

Chapter 12 Configuring connectors to sources of data 399


About connectors in Data Federator........................................................400
Configuring Access connectors...............................................................400
Configuring Access connectors..........................................................400
Configuring DB2 connectors....................................................................401
Configuring DB2 connectors..............................................................401
Configuring Informix connectors..............................................................402
Supported versions of Informix...........................................................402
Configuring Informix connectors........................................................402
List of Informix resource properties....................................................403
Configuring MySQL connectors...............................................................410
Configuring MySQL connectors.........................................................410
Specific collation parameters for MySQL...........................................411
Configuring Oracle connectors................................................................412
Configuring Oracle connectors...........................................................412
Specific collation parameters for Oracle............................................412
How Data Federator transforms wildcards in names of Oracle tables.413
Configuring Netezza connectors.............................................................414
Supported versions of Netezza..........................................................414
Configuring Netezza connectors........................................................414
List of Netezza resource properties...................................................415
Configuring Progress connectors............................................................422
Configuring connectors for Progress..................................................422
Installing OEM SequeLink Server for Progress connections.............423
Configuring middleware for Progress connections.............................423
Configuring SAS connectors...................................................426
Configuring connectors for SAS.........................................................426
Supported versions of SAS................................................................427
Installing drivers for SAS connections................................................427
Optimizing SAS queries by ordering tables in the from clause by their
cardinality...........................................................................................428
List of JDBC resource properties for SAS..........................................428
Configuring SQL Server connectors........................................................430
Configuring SQL Server connectors..................................................430
Specific collation parameters for SQL Server....................................431
Configuring Sybase connectors...............................................................432
Supported versions of Sybase...........................................................432
Configuring Sybase connectors.........................................................432
Installing middleware to let Data Federator connect to Sybase.........433
List of Sybase resource properties.....................................................434
Configuring Sybase IQ connectors..........................................................442
Supported versions of Sybase IQ......................................................442
Configuring Sybase IQ connectors....................................................442
List of Sybase IQ resource properties................................................443
Configuring Teradata connectors.............................................................451
Supported versions of Teradata.........................................................451
Configuring Teradata connectors.......................................................451
List of Teradata resource properties...................................................452
Default values of capabilities in connectors.............................................459
Configuring connectors that use JDBC...................................................460
Pointing a resource to an existing JDBC driver..................................460
List of JDBC resource properties.......................................................461
List of JDBC resource properties for connection pools......................472
List of common JDBC classes............................................................475
List of pre-defined JDBC URL templates...........................................476
transactionIsolation property..............................................478
urlTemplate.........................................................................................479
Configuring connectors to web services..................................................479
List of resource properties for web service connectors......................480
Managing resources and properties of connectors.................................483
Managing resources using Data Federator Administrator..................483
Creating and configuring a resource using Data Federator
Administrator......................................................................................486
Copying a resource using Data Federator Administrator...................488
List of pre-defined resources..............................................................489
Managing resources using SQL.........................................................491
Creating a resource using SQL..........................................................491
Deleting a resource using SQL..........................................................492
Modifying a resource property using SQL..........................................493
Deleting a resource property using SQL............................................494
System tables for resource management..........................................494
Collation in Data Federator......................................................................495
Supported Collations in Data Federator.............................................496
Setting string sorting and string comparison behavior for Data Federator
SQL queries.......................................................................................497
How Data Federator decides how to push queries to sources when using
binary collation...................................................................................500

Chapter 13 Managing user accounts and roles 503


About user accounts, roles, and privileges..............................................504
About user accounts...........................................................................504
Creating a Data Federator administrator user account...........................505
Creating a Data Federator Designer user account..................................506
Creating a Data Federator Query Server user account...........................506
Managing user accounts with Data Federator Administrator...................507
Properties of user accounts.....................................................................511
Managing roles with Data Federator Administrator.................................511
Granting privileges to a user account or role...........................................514
Managing privileges with Data Federator Administrator..........................514
Managing user accounts with SQL statements.......................................516
Creating a user account with SQL.....................................................516
Dropping a user account with SQL....................................................516
Modifying a user password with SQL.................................................517
Modifying properties of a user account with SQL...............................517
Listing user accounts using SQL........................................................517
Managing privileges using SQL statements ...........................................518
About grantees...................................................................................519
Granting a privilege with SQL.............................................................519
Revoking a privilege with SQL...........................................................520
Checking a privilege with SQL...........................................................520
Verifying privileges using system tables.............................................521
List of privileges..................................................................................522
Managing roles with SQL statements......................................................523
Creating a role with SQL....................................................................523
Dropping a role with SQL...................................................................524
Granting roles with SQL.....................................................................524
Verifying roles using system tables....................................................524
Managing login domains..........................................................................525
Adding a login domain........................................................................525
Modifying a login domain description.................................................525
Deleting login domains.......................................................................526
Mapping user accounts to login domains...........................................526
System tables for user management.......................................................527
Using a system table to check the properties of a user.....................527

Chapter 14 Controlling query execution 529


Query execution overview.......................................................................530
Auditing and monitoring the system........................................530
Viewing target tables..........................................................530
Viewing datasource tables.................................................................530
Querying metadata.............................................................................532
Cancelling a query...................................................................................534
Cancelling a query..............................................................................534
Cancelling all running queries............................................................535
Data types................................................................................................535
Configuring the precision and scale of DECIMAL values returned from
Data Federator Query Server.............................................................535
Viewing system configuration..................................................................543
Statistics on query execution..............................................................543
Statistics on the buffer manager.........................................................543
Queries registered for buffer manager...............................................543
Detailed buffer allocation for operators..............................................543
Statistics on wrapper management....................................................544

Chapter 15 Optimizing queries 545


Tuning the performance of Data Federator Query Server.......................546
Updating statistics..............................................................................546
Optimizing access to the swap file.....................................................546
Optimizing memory............................................................................547
Operators that consume memory.......................................................548
Guidelines for using system and session parameters to optimize queries
on large tables....................................................................................548

Chapter 16 Managing system and session parameters 553


About system and session parameters...................................................554
Managing parameters using Data Federator Administrator.....................554
Managing parameters using SQL statements.........................................556
List of parameters....................................................................................557
Configuring the working directory............................................................573

Chapter 17 Backing up and restoring data 575


About backing up and restoring data.......................................................576
Starting the Data Federator Backup and Restore tool.............................576
Starting the Backup and Restore tool................................................576
Backing up your Data Federator data......................................................578
Restoring your Data Federator data........................................................578

Chapter 18 Deploying Data Federator servers 581


About deploying Data Federator servers.................................................582
Deploying a project on a single remote Query Server.............................582
Possibilities for deploying a project on a single remote instance of Query
Server.................................................................................................583
Configuring Data Federator Designer to connect to a remote Query
Server.................................................................................................584
Sharing Query Server between multiple instances of Designer.........585
Deploying a project on a cluster of remote instances of Query Server....586
Possibilities for deploying a project on a cluster of remote instances of
Query Server......................................................................................587
Starting and stopping Connection Dispatcher.........................................589
Starting Connection Dispatcher when Data Federator Windows Services
are installed........................................................................................589
Starting Connection Dispatcher when Data Federator Windows Services
are not installed..................................................................................590
Starting Connection Dispatcher on AIX, Solaris or Linux...................590
Shutting down Connection Dispatcher when Data Federator Windows
Services are installed.........................................................................591
Shutting down Connection Dispatcher when Data Federator Windows
Services are not installed...................................................................591
Shutting down Connection Dispatcher on AIX, Solaris or Linux........592
Configuring Connection Dispatcher.........................................................593
Setting parameters for Connection Dispatcher..................................593
Guidelines for using Connection Dispatcher parameters to configure validity times of references to servers.......................593
Configuring logging for Connection Dispatcher..................................595
Parameters for Connection Dispatcher..............................................596
Managing the set of servers for Connection Dispatcher....................599
Format of the Connection Dispatcher servers configuration file........600
Configuring fault tolerance for Data Federator........................................601

Chapter 19 Data Federator Designer reference 603


Using data types and constants in Data Federator Designer..................604
Date formats in Data Federator Designer................................................607
Data extraction parameters for text files..................................................607
Formats of files used to define a schema................................................608
Running a query to test your configuration..............................................614
Query configuration.................................................................................615
Printing a data sheet................................................................................617
Inserting rows in tables............................................................................618
The syntax of filter formulas.....................................................................619
The syntax of case statement formulas...................................................620
The syntax of relationship formulas.........................................................621

Chapter 20 Function reference 623


Function reference...................................................................................624
Aggregate functions...........................................................................624
Numeric functions...............................................................................628
Date/Time functions...........................................................................637
String functions...................................................................................647
System functions................................................................................667
Conversion functions..........................................................................671

Chapter 21 SQL syntax reference 687


SQL syntax overview...............................................................................688
Data Federator Query Server query language........................................688
Identifiers and naming conventions....................................................688
Data Federator data types..................................................................696
Expressions........................................................................................701
Comments..........................................................................................706
Statements.........................................................................................706
Data Federator SQL grammar.................................................................713
Syntax key..........................................................................................714
Grammar for the SELECT clause.......................................................715
Grammar for managing users............................................................721
Grammar for managing resources.....................................................725

Chapter 22 System table reference 727


System table reference............................................................................728
Metadata system tables.....................................................................729
Function system tables.......................................................................733
User system tables.............................................................................735
Resource system tables.....................................................................739
Other system tables...........................................................................740

Chapter 23 Stored procedure reference 743


List of stored procedures.........................................................................744
getTables............................................................................................744
getCatalogs........................................................................................746
getKeys..............................................................................................746
getFunctionsSignatures......................................................................748
getColumns........................................................750
getSchemas.......................................................................................751
getForeignKeys..................................................................................752
refreshTableCardinality.......................................................................754
clearMetrics........................................................................................755
addLoginDomain................................................................................755
delLoginDomains................................................................................756
alterLoginDomain...............................................................................756
getLoginDomains...............................................................................757
addCredential.....................................................................................757
delCredentials....................................................................................758
alterCredential....................................................................................760
getCredentials....................................................................................761
Using patterns in stored procedures........................................................762

Chapter 24 Glossary 765


Glossary...................................................................................................766
Terms and descriptions............................................................................766

Chapter 25 Troubleshooting 777


Installation................................................................................................778
Installing from remote machine..........................................................778
Input line is too long: error on Windows 2000....................................778
Input line is too long: error on Windows 2000....................................779
McAfee's On-Access Scan.................................................................779
Errors like missing method due to uncleared browser cache after
installation..........................................................................................779
Finding the Connection Dispatcher servers configuration file when running
Connection Dispatcher as a Windows service...................................780
Datasources.............................................................................................781
File on SMB share..............................................................................781
Separators not working......................................................781
Cannot edit an existing datasource....................................782
Connection parameters......................................................................782
Targets.....................................................................................................783
Cannot see any datasources in targets windows...............................783
Mappings.................................................................................................783
Cannot reference lookup table in existing mapping...........................783
Error in formula that uses a BOOLEAN value....................................784
Source relationships introduce cycles in the graph............................784
Table used in mapping rule is no longer available.............................785
Table added to the mapping rule should be core...............................786
Table added to the mapping rule should not be core.........................786
At least one table should be core.......................................................787
Domain tables..........................................................................................787
Cannot remove domain table.............................................................787
Data Federator Designer.........................................................................788
Cannot select a table..........................................................................788
Cannot find column names.................................................................788
Cannot use column or table after changing the source......................789
Data Federator connectors......................................................................789
Exception for entity expansion limit....................................................789
On Teradata V2R6, error Datatype mismatch in the Then/Else
expression..........................................................................................790
Accessing data........................................................................................790
Target tables not accessible on Data Federator Query Server..........790
Target tables not accessible from deployed project on Data Federator
Query Server......................................................................................791
Cannot access CSV files on a remote machine using a generic ODBC
connection..........................................................................................791
Data Federator services..........................................................................792
Starting and stopping services...........................................................792
Networking...............................................................792
Network Connections.........................................................................792

Chapter 26 Data Federator logs 795


About Data Federator logs.......................................................................796
Data Federator Designer logs.................................................................796
Data Federator Query Server logs..........................................................796
Activating Data Federator Query Server logs....................................796

Appendix A Get More Help 799

Index 803



Chapter 1 Introduction to Data Federator

The Data Federator application


Data Federator is an Enterprise Information Integration (EII) application that
provides a uniform, coherent and integrated view of distributed and
heterogeneous data sources. The data sources can be spread across a
network, managed by different data management systems, and administered
by different parts of the organization.

Data Federator differs in its architecture from ETL (Extract, Transform, Load) tools:
the data it manages is not replicated into another system, but is instead presented
as optimized virtual data tables. The virtual database is a collection of relational
tables that are manipulated with SQL but do not hold stored data.

Data Federator allows you to consolidate your various data sources into one
coherent set of target tables. From these consolidated virtual target tables,
reporting tools can perform queries and be confident that the data are reliable,
trustworthy and up-to-date. For example, you can create a universe using
BusinessObjects Designer or create a query directly against the virtual target
tables using Crystal Reports.

An answer to a common business problem

Most businesses maintain several data sources that are spread across
different departments or sites. Often, duplicate information appears within
the various data sources, but it is cataloged in such a way that it is difficult
to use the data to make strategic decisions or perform statistical analysis.

The following diagram illustrates the classic approach to consolidating data.


What are the challenges?

When your task involves consolidating several disparate data sources, you
most likely face the following challenges.
• simplicity and productivity - you want to develop a solution once
• quality control - you want to ensure that the consolidated data can be
trusted and is correct
• performance - you want to make sure that access to the data is optimized
to produce results quickly
• maintenance - you want to develop a solution that requires little or no maintenance as new sources of data are added or as existing sources change

How can the problem be defined?

When faced with the above challenges, you can define the problem in terms
of the following needs.
• need to retrieve the content of each source
• need to aggregate the information relative to the same customer


• need to reconcile or transform the data to follow a uniform representation

How does Data Federator solve this problem?

The following diagrams illustrate how Data Federator addresses the above
needs.

Data Federator operates between your sources of data and your applications. The communication between the data sources and the Data Federator Query Server takes place by means of "connectors." In turn, external applications query data from Data Federator Query Server by using SQL.
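For example, once a project is deployed, a client application connected to Data Federator Query Server through JDBC or ODBC could submit a standard SQL query against a virtual target table. The table and column names below are hypothetical; use the names of the target tables in your own deployed project.

   SELECT client_id, last_name, total_sales
   FROM CUSTOMER
   WHERE total_sales > 10000
   ORDER BY last_name

Data Federator Query Server translates such a query into queries against the underlying sources and returns a single consolidated result set.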

The following diagram shows where Data Federator operates in relation to your sources of data and your applications.


Internally, Data Federator uses virtual tables and mappings to present the
data from your sources of data in a single virtual form that is accessible to
and optimized for your applications.

The following diagram shows the internal operation of Data Federator and
how it can aggregate your sources of data into a form usable by your
applications.

Fundamental notions in Data Federator


You work with Data Federator in two phases:
• design time
• run time


Design time is the phase of defining a representation of your data, and run
time is the phase where you use that representation to query your data.

Data Federator Designer: design time

At design time, you use Data Federator Designer to define a data model,
composed of datasource tables and target tables. Mapping rules, domain
tables and lookup tables help you to achieve this goal.

The outcome of this phase is a mapping from your datasources to your targets. Your target tables are virtual tables that live inside Data Federator, and they can be queried at run time.

Data Federator Query Server: run time

Once your data model and its associated metadata are in place, your
applications can query these virtual tables as a single source of data. Your
applications connect to and launch queries against Data Federator Query
Server.

Behind the scenes at run time, the Data Federator Query Server knows how
to query your distributed data sources optimally to reduce data transfers.

Important terms

The following table lists some of the fundamental terms when working with
Data Federator. For a full list of definitions, see Glossary on page 766.

• target - the database that you create using Data Federator Designer: it consolidates the data of multiple sources into a form that can be used by your applications.
• target table - one of the tables that you define in your target.
• datasource - a representation of a source of your data, in tabular form. You define a datasource in Data Federator Designer.
• connector - a file that defines your sources of data in a format that Data Federator Query Server understands. When you use Data Federator Designer to add a datasource, the definition that you make is stored in a configuration file for a connector.
• lookup table - a table that typically maps values from one column to a different column. You define it in Data Federator Designer, and you use it when adding mappings.
• mapping - a set of rules that define a correspondence between a set of datasource tables and a target table.

Data Federator user interface


The following diagram shows the layout and elements of a Data Federator
Designer window.


Data Federator Designer maintains a consistent interface for all the components of a Data Federator project.

The main components of the Data Federator Designer user interface are:
• (A) the breadcrumb, showing you the position of the current window in
the tree view
• (B) the tabs, where you navigate among your open projects
• (C) the project toolbar, where you add, import or export projects
• (D) the tree view, where you navigate among the components in your
project
• (E) the main view, where you define your components
• (F) the Save button, which saves the changes you made on the current
window
• (G) the Open button, which lets you open a project from the project
Configuration window
• (H) the Reset button, which resets the changes you made on the current
window

Overview of the methodology
This section introduces the methodology that you can follow to work with
Data Federator effectively.
You complete the following steps when working with Data Federator.
1. Add the targets.
2. Add the datasources.
3. Map the datasources to the targets.
4. Check the target data against constraints.
5. Deploy the project.

The following diagram summarizes steps 1-3 above. These steps represent
the construction phase in Data Federator Designer, at the end of which Data
Federator understands your source data and can present it as a federated
view.


Adding the targets

Adding the targets is a matter of designing the schemas of the tables that
your applications will query.

This design is driven by the needs of your applications. You define the target
schema by examining what data your applications require, and by
implementing this schema as a target table in Data Federator Designer.

Related Topics
• Managing target tables on page 46

Adding the datasources

In Data Federator, you reference your existing sources of data by adding "datasources".

Data Federator accepts database management systems and CSV files as sources of data. These sources can be located on different servers at different locations and use different protocols for access.

Depending on the type of source, you define the data access system in which
it is stored, you select the capabilities of the data access system, or you
describe the data extraction parameters if your source is a text file.

Once you have referenced either a text file or a database system as a source, Data Federator names it a "datasource". The term "datasource" refers to the Data Federator representation of an actual source of data. It is this abstraction that lets Data Federator understand the data and perform real-time queries on it.

Related Topics
• About datasources on page 66
• Creating text file datasources on page 158
• Creating generic JDBC or ODBC datasources on page 138
• Configuring a remote Query Server datasource on page 202


Mapping datasources to targets

The mapping phase links your datasources to your targets.


During the mapping phase, you can use filters, relationships and formulas
to convert values from the ones in your datasources to the ones expected
by your targets.

Mapping formulas let you make computations on existing data, in order to convert it to its target form. Data Federator Designer lets you add additional data that does not exist in your datasource by creating lookup tables and domain tables. You can also describe additional logic by adding filters and relationships between datasource tables.

Once the mappings are in place, Data Federator Query Server knows how
to transform, in real-time, the data in your datasources into the form required
by your targets.

Related Topics
• Mapping datasources to targets process overview on page 216

Checking if data passes constraints

Once your mappings are defined, Data Federator Designer helps you check
the validity of the data that results from the mappings.

Data Federator Designer defines several default constraint checks, such as checking that a primary key column never produces duplicate values, or checking that a column marked as "NOT-NULL" does not have any NULLs in it. You can also add custom constraints.
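To see the kind of verification these default checks correspond to, they can be expressed as SQL queries such as the following. The table and column names are hypothetical, and Data Federator runs its own equivalent checks for you; you do not write this SQL yourself.

   -- key values that occur more than once violate the primary key check
   SELECT client_id, COUNT(*)
   FROM CUSTOMER
   GROUP BY client_id
   HAVING COUNT(*) > 1

   -- rows with NULL in a NOT-NULL column violate the not-null check
   SELECT *
   FROM CUSTOMER
   WHERE client_id IS NULL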

Once your constraints are defined, Data Federator Designer lets you check each mapping and mark the ones that produce valid results, so that you can refine the rules until they are ready for production.

Related Topics
• Testing mapping rules against constraints on page 294
• Defining constraints on a target table on page 294


Deploying the project

When your mappings are tested in Data Federator Designer, you can deploy
your project on Data Federator Query Server.

When you deploy your project, its tables are usable by applications that send
queries to Data Federator Query Server.

Related Topics
• Managing a project and its versions on page 308



2 Starting a Data Federator project

Working with Data Federator


To start working with Data Federator, you create a "project" in Data Federator
Designer.

A "project" is a workspace containing all the components used by Data


Federator: targets, datasources, mappings, lookup tables, domain tables,
and constraint checks. Each project has versions, and each version is either:
• in development
• deployed

While you work on a project, it is considered to be in development. When you are ready to put your work into production, you deploy the project. Once you deploy a project, it becomes a catalog on Data Federator Query Server, and other applications can send queries to it.

Login and passwords for Data Federator


The default user name is sysadmin.

The default password is sysadmin.

You should use Data Federator Administrator to change the login parameters
after installation.

Related Topics
• Starting Data Federator Administrator on page 384

Adding new users


You can add new users to Data Federator using Data Federator Administrator.

Related Topics
• Data Federator Administrator overview on page 384
• Starting Data Federator Administrator on page 384

Starting a project
To start a project in Data Federator Designer, you add a project and then
open the project.

Related Topics
• Adding a project on page 41
• Opening a project on page 41
• Managing a project and its versions on page 308

Adding a project

To define targets, datasources and mappings, you must add a project to the
Data Federator list of projects.

When you add a project, it appears in the Data Federator list of projects, and
you can switch between different projects.
1. At the top of the window, click Projects.
2. Click Add project.
The New project window appears.
3. Enter a name and description for the project in the Project name and
Description fields, and click Save.
Data Federator adds the project to the list of projects.

Opening a project

You can only open a project that is not locked by another user account. If it
is locked, wait for the other user account to unlock the project, or wait until
the other user account's session expires.

In order to work on your targets, datasources and mappings, you must open
a project. You open projects from the Projects tab.
1. At the top of the window, click the Projects tab.
2. In the tree list, click your-project-name.
The "Configuration" window appears.


3. Click Open.
The your-project-name tab appears.

4. Click the your-project-name tab.


The latest version of your project opens.

Once your project is open, you can add targets, datasources and
mappings to it.

Related Topics
• Unlocking projects on page 43
• Managing target tables on page 46
• About datasources on page 66
• Creating database datasources using resources on page 72
• Mapping datasources to targets process overview on page 216
• Opening multiple projects on page 319

Deleting Data Federator projects

You can only delete a project that is not locked by another user account. If it is locked, wait for the other user account to unlock the project, or wait until the other user account's session expires.
1. Click the Projects tab.
2. Click the Delete this project icon.

Related Topics
• Unlocking projects on page 43
• Managing a project and its versions on page 308

Closing Data Federator projects


1. Click the your-project-name tab.
2. Click the Close this project icon.


The project closes and becomes unlocked for other user accounts.

Related Topics
• Unlocking projects on page 43
• Managing a project and its versions on page 308

Unlocking projects

When you open a project, Data Federator Designer locks it. When other user accounts try to access the project, Data Federator refuses access and indicates that the project is locked by your user account.

To unlock a project, the user account that locked it must log in and close the
project.

If the password for the user account that locked the project is lost, the system administrator can reset it. You can then log in using the user account that locked the project, and unlock it.

If you open the same project on two machines with the same user account,
the last machine will lock the project. If you return to the first machine, the
project will be open, but you will not be able to save your changes. In this
case, you will have to decide if you want to keep the changes you made on
the first machine or on the second machine.

Data Federator also automatically unlocks the project after the session
timeout value expires. This value is set to 30 minutes.

Related Topics
• Closing Data Federator projects on page 42
• Managing a project and its versions on page 308



3 Creating target tables

Managing target tables


Target tables are the Data Federator tables that you create to present data
in the correct format to your external applications.

You define target tables in the Data Federator Designer user interface. Once
you have defined the target tables and deployed your project, the Data
Federator server (Data Federator Query Server) exposes your tables to your
other applications.

Adding a target table manually


1. Select Add a new target table from the Add drop-down arrow.
The New target table window appears.

2. Type a name for the table in the Table name field, and a description in
the Description field.
3. Click Add columns, then click the number of columns that you want to
add.
Empty rows appear in the Table schema pane. Each row lets you define
one column.

You can add rows repeatedly.

4. Fill in each row in the Table schema pane with the name and type of the
column that you want to add.
5. Click Save.
Your target table appears in the Target tables tree list.

Related Topics
• Inserting rows in tables on page 618
• Adding a mapping rule for a target table on page 217
• Using data types and constants in Data Federator Designer on page 604


Adding a target table from a DDL script

This procedure shows you how to add a target table by opening a file that
contains a DDL script.
1. Select Add target tables from DDL script from the Add drop-down
arrow.
The Import a DDL script window appears.

2. Import a DDL script in one of the following ways:


• Click Import a DDL script, then click Browse and select a file that
contains a DDL script that defines a table.
• Click Manual input, then type in a DDL script in the text box.

3. Click Save.
Data Federator Designer executes your DDL script and adds a new table.
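For example, a DDL script along the following lines creates a target table with four columns and a primary key. The table and column names are only an illustration, and the exact set of data types and constraints you can use depends on the DDL dialect that Data Federator Designer accepts (see Formats of files used to define a schema on page 608).

   CREATE TABLE CUSTOMER (
       client_id      INTEGER NOT NULL,
       last_name      VARCHAR(50),
       first_name     VARCHAR(50),
       marital_status VARCHAR(2),
       PRIMARY KEY (client_id)
   );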

Related Topics
• Adding a mapping rule for a target table on page 217
• Formats of files used to define a schema on page 608

Adding a target table from an existing table

This procedure shows you how to create a new target table by copying an
existing table or datasource.
1. Select Add target table from existing table from the Add drop-down
arrow.
The Add target table from existing table window appears.

2. Expand the Tables tree list and select the target table to be added.
The name of the selected table appears in the Replace with table field.
3. Click the table that you want to use.
The name of the selected table appears in the Selected table field, with
copyOf_your-table-name in the New target table's name field.


4. If you want to create a default mapping rule for your new target table,
select the Create default mapping rule check box.
A default mapping rule maps each column of your new table to its
corresponding column in the original table.
5. Click Save.
The Target tables window is displayed showing all created target tables.
6. Click copyOf_your-table-name in the tree list.
The copyOf_your-table-name window appears, showing the columns
copied from your original table to your new target table.

7. Modify the columns as required, and click Save when done.

Related Topics
• Adding a mapping rule for a target table on page 217

Changing the name of a target table

You can change the name of a target table from the Target tables > your-target-table-name window.
1. In the tree list, click your-target-table-name.
2. In the General pane, type the new name of your target table.
3. Click Save.
The name of the target table changes.

Displaying the impact and lineage of target tables


1. Open your target table.
2. Click Impact and Lineage.
The Impact and lineage pane for your target table expands and appears.

Related Topics
• How to read the Impact and lineage pane in Data Federator Designer on
page 52


Details on configuring target table schemas

This section describes the options you have when you are defining the
schema of a target table. You can use this information while adding a target
table manually.

Table 3-1: Description of the Table schema pane

• Select - allows you to select a row, if you want to move it or add a new row before it (see Inserting rows in tables on page 618)
• Column name - lets you enter the name of a column of the target table
• Type - the data type of the column (see Using data types and constants in Data Federator Designer on page 604)
• Domain table - if the data type is enumerated, this box lets you choose the domain table that contains the allowed values for this column (see Managing domain tables on page 54)
• Key icon - specifies if the column is the key, or part of the key, of the target table
• Not-null - specifies if the values in this column must not be NULL
• Input column icon - if checked, Data Federator refuses to answer queries on this table unless the querying application supplies a value for this column
• Description icon - allows you to enter a description of the column
• Delete icon - deletes the row

Related Topics
• Adding a target table manually on page 46

Determining the status of a target table


Data Federator displays the current status of each of your targets. You can
use this status to learn if you have entered all of the information that Data
Federator needs to use the target.
Each target goes through the statuses:
• incomplete

(Data Federator does not show this status in the interface. All new targets
are put in this status.)

• mapped

The status is shown in the Target tables > your-target-table-name > your-mapping-rule-name window.

This table shows what to do for each status of the target life cycle.

• incomplete - Not all of the active mapping rules in the target are complete. You can deactivate the mapping rules, or make the active mapping rules complete.
• mapped - All of the active mapping rules in the target are complete. You can test the mapping rules.
complete.

Related Topics
• Deactivating a mapping rule on page 280
• Mapping values using formulas on page 222
• Testing mappings on page 280
• Deploying a version of a project on page 324


How to read the Impact and lineage pane in Data Federator Designer
Table 3-3: How to read the Impact and lineage pane in Data Federator Designer

• boxes - Each box represents a component of your Data Federator Designer project. A box can be a target table, a final datasource table, a mapping rule, a domain table or a lookup table.

Lineage tab
• arrows - On the Lineage tab, the arrows show where data comes from. Each arrow points to the component from which the first component gets its data.

Impact tab
• arrows - On the Impact tab, the arrows show where data goes. Each arrow points to the component to which the first component provides data.

Testing targets
To test a target, you must verify if the information you entered allows Data
Federator to correctly populate the target tables.
You can encounter the following problems:
• You have written a mapping formula that maps the wrong value.
• Your mapping formulas do not result in sufficient information for your
target columns.
• Your mapping formulas result in null values in columns that must not be
NULL.

Data Federator lets you test a target by using the Target table test tool pane.

Testing a target

The target must have the status "mapped" (see Determining the status of a
target table on page 50).

You can run a query on a target to test that all of its mapping rules are
mapping values correctly and consistently.
1. In the tree list, click your-target-table-name.
The Target tables > your-target-table-name window appears.

2. In the Target table test tool pane, click View data to see the query
results.
For details on running the query, see Running a query to test your
configuration on page 614.

For details on printing the results of the query, see Printing a data sheet
on page 617.
Data Federator displays the data in columns in the Data sheet frame.
3. Verify that the values appear correctly.
Otherwise, try adjusting the mapping rules in the target again.


Example tests to run on a mapping rule

Tip:
Example tests to perform on your mapping rule
• Fetch the first 100 rows.

Run a query, as in Testing a mapping rule on page 281, and select the
Show total number of rows only check box.

The number of rows will appear above the query results.


• Fetch a single row.

For example, if you have a target table with a primary key of client_id in
the range 6000000-6009999, type:

client_id=6000114

in the Filter box.

Click View data, and verify the value of each column with the data in your
datasource table.
• Verify that the primary key columns are never NULL.

Type the formula:

client_id <> NULL

If any of the returned columns are NULL, verify that your mapping rule
does not insert NULL values.

Managing domain tables


In Data Federator, domain tables are tables that have the following properties:
• Like datasource tables, domain tables hold columns of data.
• Unlike datasource tables, the data in a domain table is stored on the Data
Federator Query Server.

• Unlike datasource tables, you can use a domain table to create an
enumeration to be used in a target table (see Using a domain table as
the domain of a column on page 62).
• Domain tables support up to 5000 rows.
• You can combine a domain table with a lookup table to map the values
in a datasource column to the values in a domain table (see Using lookup
tables on page 242).

You create domain tables when you want to make an enumeration available for a column in one of your target tables.

You can also use a domain table to constrain the values in the column of a
target table. See Using a domain table to constrain possible values on
page 263.

Adding a domain table to enumerate values in a target column

The following procedure is an example of a domain table that you can use
to enumerate a list of values for a column called marital_status. The list
in this example contains a code for each marital status.

In this example, the list contains:


• SE (to represent single)
• MD (married)
• DD (divorced)
• WD (widowed)

1. Click Add > Add domain table.


The "New domain table" window appears.

2. In the Table name field, type a name for your new domain table.
3. In the "Table schema" pane, click Add columns, then click 1 column to
add one column.
One empty row appears in the "Table schema" pane.

4. Complete the row with the following values:


• In the Column name box, type marital_status.


• In the Type box, type String.

In the key column (key icon), select the key check box.

5. Click Save.
The your-domain-table-name window appears.

6. In the "Table contents" pane, click Add, then click Add again.
The "Add rows" window appears, showing one empty row with the columns
that you defined.

7. Click Add rows, then click 3 rows to add three more rows.
8. In the field that you named marital_status, enter the values:
• SE

• MD

• DD

• WD

9. Click Save.
The "Update report" window appears.
10. Click Close.
The your-domain-table-name window appears, showing your new
table with the values you entered. You can now use this domain table to
define a set of values for a column in a target table.

Related Topics
• Using data types and constants in Data Federator Designer on page 604

Examples of domain tables

This section shows some examples of domain tables that you can use in
different cases.

Example: Single-column domain table used as an enumeration
You can use this type of table to enumerate the values in the column of a
target table (see Using a domain table as the domain of a column on
page 62)

marital_status

SE

MD

DD

WD

Example: Two-column domain table used as an enumeration, with descriptions
You can use this type of table to enumerate the values in the column of a
target table, and add a description to each value. You can use the
descriptions to make the corresponding values easier to remember.

See Using a domain table as the domain of a column on page 62.

marital_status    marital_status_description
SE                single
MD                married
DD                divorced
WD                widowed

Example: Four-column domain table with a relationship between the columns
You can use this type of table in the following situation:
• You want to enumerate the values of one column of a target table.
• The column is related to another column, and you want to represent this
relationship.

In this example, you could use department_code as the domain of a column called "department" in your target table, and you could populate the first column, called "division", based on the value of department_code.

See Using a domain table as the domain of a column on page 62.

division_code    division_code_description    department_code    department_code_description
HR               human resources              D101               benefits
HR               human resources              D102               new hires
RD               research and development     D111               servers
RD               research and development     D112               workstations
MKTG             marketing                    D121               North America
SLS              sales                        D231               North America
PRCH             purchasing                   D241               Global

Adding a domain table by importing data from a file

• You must have created a text file containing the domain data. The file
must be in comma-separated value (CSV) format, as in the example
above.
• For details on data types that you can use, see Using data types and
constants in Data Federator Designer on page 604.

If you have a lot of domain data, you can enter it into your domain table
quickly by importing the data from a text file.

For example, Data Federator can import domain data such as the following.

file: my-domain-data.csv
"1";"single"
"2";"married"
"3";"divorced"
"4";"widowed"

1. Add a domain table.


2. Add a datasource that points to the file from which you want to import.
3. When the Domain tables > your-domain-table window appears, click Add, then click Add from datasource table.
The Domain tables > your-domain-table > Add rows from a
datasource window appears.

4. Refer to the Select a datasource table field and select the datasource
table to be added to the domain table.
The columns of the selected datasource table are displayed in the Select
a subset of columns field on the right. You can, if required, select one
or all of the columns in this field and click View Data to display the
contents of the selected columns.

5. Refer to the Domain columns mapping pane and map the required
datasource column from each domain table column's drop-down list-box.
6. Click Save.
The Domain tables > your-domain-table-name > Update report
window is displayed and your file's imported data is added to your domain
table.

Related Topics
• Creating text file datasources on page 158
• Adding a domain table to enumerate values in a target column on page 55

Dereferencing a domain table from your target table


1. Edit your target table.
2. In the Table Schema pane, find a column that references your domain
table and select String under the Type column.
Do this for each column that is of type Enumerated, and that references
your domain table.

3. Click Save.

Related Topics
• Adding a target table manually on page 46


Exporting a domain table as CSV

• You must have added a domain table.

See Managing domain tables on page 54.

1. In the tree list, click Domain tables.


The Domain tables window appears.
2. Select the table you want to export as CSV.
The Domain tables > your-domain-table-name window appears.
3. Click Export.
The File download window appears giving you the option of opening or
saving your Domain_your-domain-table-name.csv file.
4. Click Save and save the .csv file to a location of your choosing.

Deleting a domain table

To delete a domain table, you must first remove references to it from any
lookup and target tables.
1. In the tree list, click Domain tables.
2. Select the tables that you want to delete.
3. Click Delete, and click OK to confirm.

Related Topics
• Dereferencing a domain table from your target table on page 60
• Dereferencing a domain table from a lookup table on page 251

Using domain tables in your target tables


This section describes how to use domain tables. You can use domain tables
to enumerate the values of a column in your target table.


Using a domain table as the domain of a column

• You must have created a domain table as described in Managing domain tables on page 54.

This procedure shows how to use the values that you entered in a domain
table as the values that can appear in a column of your target table.
1. Add a target table. See Managing target tables on page 46.
2. When the Target tables > New target table window appears, click Add
columns, then, from the list, click 1.
An empty row appears in the Target schema pane.

3. In the Column name box, type a name for your column.


4. In the Type list, click Enumerated.
An edit icon appears beside the Type box.

5. Click the Edit icon.
The Target tables > New target table > Domain constraint table 'your-column-name' window appears.

This window shows a list of your domain tables.

6. In the list, expand the name of your domain table, then click the column
that you want to use as the domain.
For example, if your domain table contains the columns marital_code,
and marital_code_description, click marital_code.

The name of the domain table appears in the Selected table box. The
name of the column appears in the Selected column box.

7. Click Save.
The "Target tables > New target table" window appears.

The name of the domain table and domain column that you selected
appears in the Domain table box in the row that defines your new target
column.

When you choose values for this column in Data Federator Designer,
only the values in domain table will appear.

To associate a set of enumerated values in your datasource to a set of enumerated values in your target, see The process of adding a lookup table between columns on page 243.

To constrain rows in your datasource to those whose values match a set of enumerated values in your target, see Using a domain table to constrain possible values on page 263.



4 Defining sources of data

About datasources
Data Federator projects use datasources to access a project's sources of data. A datasource is a pointer that represents the data kept in a source, for example a relational database in which you store customer data. A datasource can also point to a text file, for example one in which you keep sales information.

Datasources are a basic component of Data Federator. A datasource consists of a table, or a set of tables. Once you define a datasource, you can connect your project to the datasource, and populate your target tables with the data.

Datasources that you can create fall into the following categories:
• Databases are datasources that represent databases such as Oracle,
Access and DB2. Data Federator includes pre-defined resources that you
can use to help configure your datasource to achieve the best
performance.

This category includes relational databases that use JDBC drivers, ODBC
drivers, and openclient drivers.
• Text file datasources provide access to data held in text files, for example
comma-separated value (.csv) files.
• XML/web datasources provide access to data held in XML files, or data
provided by web services.
• A Remote Query Server datasource uses a remote Data Federator Query
Server as a source of data.

Related Topics
• Creating generic JDBC or ODBC datasources on page 138
• Generic and pre-defined datasources on page 71
• Creating remote Query Server datasources on page 202
• Creating text file datasources on page 158
• Adding an XML file datasource on page 179
• Adding a web service datasource on page 183


Datasource user interface

The following diagram shows what you see in Data Federator Designer when
you work with datasources.

The main components of the datasource user interface are:


• (A) the tree view, where you navigate among your datasources
• (B) the main view, where you define your datasources
• (C) collapsed nodes in the tree, each representing one datasource
• (D) an expanded node, showing a datasource with two statuses: a draft
and a final
• (E) a pane, showing parameters for a datasource

Draft and Final datasources

When you create a new datasource, Data Federator marks its status as Draft, to indicate that the definition is incomplete. In order to use your datasource in a mapping, you must make it Final once you have finished the definition.
• Draft: A datasource is a draft when you first create it. When a datasource
is a draft, you can modify it, but you cannot use it in a mapping.


The datasource appears under Draft in the tree list.

A draft has two statuses: Incomplete and Complete.


• Incomplete: Certain configuration parameters have not been filled in.
The values are either not complete or they are invalid.
• Complete: All necessary configuration parameters are filled in and
are valid.
• The datasource passes automatically from Incomplete to
Complete as soon as you fill in the required configuration
parameters correctly.
• The datasource passes automatically from Complete to
Incomplete if you replace a correct value with an incorrect one.
This is also the case when you add a new table in which the
required parameters are not filled in correctly.

• Final: A datasource is final when you click Make Final.

When a datasource is Final, you cannot modify it, but you can use it in
a mapping.

The datasource appears under Final in the tree list.

Table 4-1: Summary of the life cycle of a datasource

Draft, Incomplete
• means: Some datasource definition and schema definition parameters are invalid, or the datasource table schema is incomplete.
• you can do this: Modify the datasource configuration (symbols indicate the invalid parameters) and define the datasource table schema.

Draft, Complete
• means: Datasource definition and schema definition parameters are complete and valid, and all the datasource table schemas are defined.
• you can do this: Test the datasource configuration. If all the datasource tables have been added, make the datasource Final.

Final
• means: The datasource appears in the datasource tree list under Final.
• you can do this: If you need to change a Final datasource, you must copy it to a Draft first.

Related Topics
• Setting a text file datasource name and description on page 159
• Creating generic JDBC or ODBC datasources on page 138


• Defining the schema of a text file datasource manually on page 172


• Running a query on a datasource on page 211
• Making your datasource final on page 212
• Editing a final datasource on page 213

About configuration resources

The Data Federator software includes pre-defined configuration resources that you can use to create datasources. For example, the Data Federator software includes resources for databases including the following:
• Oracle
• MySQL
• SQL Server
• DB2
• Microsoft Access

Using Data Federator Administrator, you can:


• Modify a pre-defined resource to change the configurations of all
datasources that use it
• Copy a pre-defined resource, and use the copy as the base for a new
resource
• Create a new resource

Once you have created a resource in Administrator, you can use it to create
datasources in Designer.

In addition to using pre-defined resources, you can configure a generic JDBC connection for a datasource. Unlike resources, this configuration can only be used by the datasource for which it is created.

There are three types of resources:


• JDBC resources provide access through JDBC. These are used for
databases such as Access, Oracle, and DB2.
• ODBC resources provide access through ODBC. These are used for databases such as Netezza, Teradata, and Informix.

• Openclient resources are used for databases such as Sybase.

Generic and pre-defined datasources

A generic datasource is a connection configuration that you create in Data Federator Designer. You do not require Administrator access to define a generic datasource.

A generic datasource differs from a pre-defined resource in the following ways:
• Performance: The performance of a generic datasource is not as efficient
as with a pre-defined resource:
• With a generic datasource, a large degree of data processing is
performed by the Data Federator application software.
• For pre-defined resources, as much processing as possible is handled
by the database software. This results in better performance. In
addition, for pre-defined datasources, the connection parameters have
been optimized and tested for maximum performance.

• Datasource availability:
• A generic datasource does not use a configured resource. You have
to re-enter all the configuration parameters every time you create a
new generic JDBC datasource.
• You can use a pre-defined resource configuration for multiple
datasources.

Related Topics
• About datasources on page 66
• Configuration parameters for generic JDBC and ODBC datasources on
page 150
• Creating generic JDBC or ODBC datasources on page 138
• Managing resources using Data Federator Administrator on page 483


Creating database datasources using resources
To create a database datasource, you can:
• Use a resource definition to set the configuration parameters. A resource
can be used in multiple datasources across multiple projects.

Pre-defined resource definitions are supplied with the Data Federator software, and you can create custom resources using Data Federator Administrator.
• Configure a new, generic JDBC or ODBC datasource. Unlike resources,
the configuration can only be used with the datasource for which it is
created.

Related Topics
• Creating JDBC datasources from custom resources on page 133

Adding Access datasources

To create a datasource for Access:


• Ensure that the connector for Access is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Access. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Access, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Access database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Access database. If you are
not sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information


about connection parameters for Access datasources for details.
You can use the parameters defined in a deployment context as values
in these fields.

6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your Access datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Access datasources

• Authentication mode - The method to use to authenticate users' login credentials:
  • Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
  • Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
  • Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.
• Defined resource - The Data Federator resource that holds the configuration information that you want to use.
• Login domain - The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.
• ODBC DSN - The ODBC Data Source Name to use.
• Password - The password that Data Federator enters for the username.
• Table types:
  • TABLE and VIEW: Choose this to see both tables and views when you click View tables.
  • TABLE: Choose this to see only tables when you click View tables.
  • VIEW: Choose this to see only views when you click View tables.
  • ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.
• User Name - The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in Access datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.
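For example, if your deployment context defines parameters named accessDsn, dbUser and dbPassword (hypothetical names chosen for this illustration), you could fill in the connection fields as follows:

   ODBC DSN:   ${accessDsn}
   User Name:  ${dbUser}
   Password:   ${dbPassword}

At deployment time, Data Federator replaces each ${...} reference with the value that the deployment context defines for that parameter.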

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding DB2 datasources

To create a datasource for DB2:


• Ensure that the connector for DB2 is configured. Usually, your Data
Federator administrator configures the connectors.

• Ensure that you have the necessary drivers installed for DB2. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select DB2, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your DB2 database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your DB2 database. If you are not
sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information


about connection parameters for DB2 datasources for details.
You can use the parameters defined in a deployment context as values
in these fields.

6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your DB2 datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157


• Defining a connection with deployment context parameters on page 156


• Testing and finalizing datasources on page 210

Connection parameters for DB2 datasources

• Authentication mode - The method to use to authenticate users' login credentials:
  • Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
  • Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
  • Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.
• Database name - The name of the database to which to connect.
• Defined resource - The Data Federator resource that holds the configuration information that you want to use.
• Host name - The name of the host where the database is located.
• Login domain - The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.
• Password - The password that Data Federator enters for the username.
• Port - The port to which to connect.
• Prefix table names with the schema name - Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.
• Schema - The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.
• Table types:
  • TABLE and VIEW: Choose this to see both tables and views when you click View tables.
  • TABLE: Choose this to see only tables when you click View tables.
  • VIEW: Choose this to see only views when you click View tables.
  • ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.
• User Name - The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in DB2 datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Database name
• Host name
• Password
• Port
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Informix datasources

To create a datasource for Informix:


• Ensure that the connector for Informix is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Informix. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.


• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Informix, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Informix database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Informix database. If you are
not sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information


about connection parameters for Informix datasources for details.
You can use the parameters defined in a deployment context as values
in these fields.

6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your Informix datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Informix datasources

Authentication mode: The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Defined resource: The Data Federator resource that holds the configuration information that you want to use.

Login domain: The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Network layer: The middleware type, for example JDBC or ODBC. Note: Data Federator inserts this value when you select the Defined resource, and it cannot be changed.

ODBC DSN: The ODBC Data Source Name to use.

Password: The password that Data Federator enters for the username.

Prefix table names with the schema name: Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Schema: The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

User Name: The username that Data Federator uses to connect to the source of data.
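For the Schema parameter above, for example, to use tables from two schemas named SALES and FINANCE (hypothetical names, used here only for illustration), you could enter the following value:

SALES,FINANCE

To use tables from every schema, enter the % character on its own.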

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in Informix datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding MySQL datasources

To create a datasource for MySQL:


• Ensure that the connector for MySQL is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for MySQL. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select MySQL, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your MySQL database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your MySQL database. If you are
not sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for MySQL datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your MySQL datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for MySQL datasources

Authentication mode: The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Database name: The name of the database to which to connect.

Defined resource: The Data Federator resource that holds the configuration information that you want to use.

Host name: The name of the host where the database is located.

Login domain: The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Password: The password that Data Federator enters for the username.

Port: The port to which to connect.

Prefix table names with the database name: Specifies if Data Federator should add the name of the database in its SQL queries to this JDBC source of data. You can select this option only if you are using a JDBC data source that can use the database name in queries.

Table types:
• TABLE and VIEW: Choose this to see both tables and views when you click View tables.
• TABLE: Choose this to see only tables when you click View tables.
• VIEW: Choose this to see only views when you click View tables.
• ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.

User Name: The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in MySQL datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Database name
• Host name
• Password
• Port
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.
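For example, you could enter ${MYSQL_HOST} in the Host name field, ${MYSQL_PORT} in the Port field and ${MYSQL_DB} in the Database name field (the parameter names are hypothetical; use parameters that are defined in your deployment context). The same datasource definition can then point to a different MySQL server in each deployment context, simply by giving these parameters different values.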

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Oracle datasources

To create a datasource for Oracle:


• Ensure that the connector for Oracle is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Oracle. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Oracle, and click Save.
The "Draft" configuration screen is displayed.


4. In the Connection parameters pane, from the Defined resource drop-down list, select the name of the resource that defines the parameters for your Oracle database system.
The resource that you choose depends on the parameters that your Data Federator administrator configured for your Oracle database. If you are not sure which resource to choose, ask your Data Federator administrator.
5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for Oracle datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your Oracle datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Oracle datasources

Authentication mode: The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Defined resource: The Data Federator resource that holds the configuration information that you want to use.

Host name: The name of the host where the database is located.

Login domain: The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Password: The password that Data Federator enters for the username.

Port: The port to which to connect.

Prefix table names with the schema name: Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Schema: The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

SID: The system identifier for the Oracle database.

Table types:
• TABLE and VIEW: Choose this to see both tables and views when you click View tables.
• TABLE: Choose this to see only tables when you click View tables.
• VIEW: Choose this to see only views when you click View tables.
• ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.

User Name: The username that Data Federator uses to connect to the source of data.
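As an illustration of the Prefix table names with the schema name option (the names are hypothetical), when the option is selected, a table EMPLOYEES in the HR schema is referred to as HR.EMPLOYEES in the SQL queries that Data Federator sends to the data source; when it is not selected, the table is referred to simply as EMPLOYEES.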

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in Oracle datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Host name
• Password
• Port
• Schema
• SID
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Netezza datasources

To create a datasource for Netezza:

• Ensure that the connector for Netezza is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Netezza. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Netezza, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Netezza database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Netezza database. If you are
not sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for Netezza datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your Netezza datasource is added.


Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Netezza datasources

Authentication mode: The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Defined resource: The Data Federator resource that holds the configuration information that you want to use.

Login domain: The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Network layer: The middleware type, for example JDBC or ODBC. Note: Data Federator inserts this value when you select the Defined resource, and it cannot be changed.

ODBC DSN: The ODBC Data Source Name to use.

Password: The password that Data Federator enters for the username.

Port: The port to which to connect.

User Name: The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526


Connection parameters in Netezza datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Progress datasources

To create a datasource for Progress:


• Ensure that the connector for Progress is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Progress.
Installing drivers is the minimal part of configuring connectors. It is also
done by your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.
1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.


2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Progress, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Progress database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Progress database. If you
are not sure which resource to choose, ask your Data Federator
administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for Progress datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your Progress datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Progress datasources

Authentication mode: The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Defined resource: The Data Federator resource that holds the configuration information that you want to use.

Host name: The name of the host where the database is located.

Login domain: The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

ODBC DSN: The ODBC Data Source Name to use.

Port: The port to which to connect.

Prefix table names with the database name: Specifies if Data Federator should add the name of the database in its SQL queries to this JDBC source of data. You can select this option only if you are using a JDBC data source that can use the database name in queries.

Prefix table names with the schema name: Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Progress DB password: The authentication password for the Progress database.

Progress DB schema: The schema for the Progress database.

Progress DB username: The database username for the Progress database.

Schema: The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

SequeLink data source name: The name that your Data Federator administrator defined as the data source name in the administration interface of SequeLink Server: your-sequelink-data-source-name, while configuring the connector to the database.

SequeLink server host name: The name of the host where your Data Federator administrator installed the SequeLink Server, while configuring the connector to the database.

SequeLink server port: The port of the host where your Data Federator administrator installed the SequeLink Server, while configuring the connector to the database.

Table types:
• TABLE and VIEW: Choose this to see both tables and views when you click View tables.
• TABLE: Choose this to see only tables when you click View tables.
• VIEW: Choose this to see only views when you click View tables.
• ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.

User Name: The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526


Connection parameters in Progress datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Progress DB password
• Progress DB schema
• Progress DB username
• SequeLink data source name
• SequeLink server host name
• SequeLink server port

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding SAS datasources

To create a datasource for SAS:


• Ensure that the connector for SAS is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for SAS. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.


1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select SAS, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your SAS database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your SAS database. If you are not
sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for SAS datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your SAS datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for SAS datasources

Authentication mode: The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Defined resource: The Data Federator resource that holds the configuration information that you want to use.

Login domain: The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Password: The password that Data Federator enters for the username.

Port: The port to which to connect.

Prefix table names with the schema name: Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

SAS/SHARE server host name: For SAS databases, the host name of the server where SAS/SHARE is running.

Schema: The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

Table types:
• TABLE and VIEW: Choose this to see both tables and views when you click View tables.
• TABLE: Choose this to see only tables when you click View tables.
• VIEW: Choose this to see only views when you click View tables.
• ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.

Use data sets that are not pre-defined to the SAS/SHARE server: Select this check box to access multiple data sets that are not pre-defined to the SAS/SHARE server. These are data sets that are not included in the current SAS configuration. See the documentation on using data sets that are not pre-defined to the SAS/SHARE server for details.

User Name: The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in SAS datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Password
• Port
• SAS/SHARE server host name
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Using data sets that are not pre-defined to the SAS/SHARE server

You can configure Data Federator to access multiple data sets that are not
pre-defined to the SAS/SHARE server. These are data sets that are not
included in the current SAS configuration.

To access these data sets, you use the Connection Parameters area of the configuration screen. To configure a data set that is not pre-defined, use the following procedure:
1. In the Connection Parameters area, select the Use data sets that are not pre-defined to the SAS/SHARE server check box. A set of Location and Library Name fields appears.
2. In the Location field, enter the path for the data set, in the format required for the operating system that you are using, as shown in the example after this procedure.
3. In the Library name field, enter a name to use to refer to the data set, and select the Prefix table name with schema name checkbox. The library name that you entered appears as a SCHEMA.
4. Click Add data set to add a new, empty set of Location and Library
name fields, ready to define a further set if you require it.
To delete a defined data set, click the Delete button (shown as a cross
on the user interface) at the right of the data set to delete.
5. Click Save to save the configuration.
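For example (the values are hypothetical and shown only to illustrate the format), on a Windows SAS/SHARE server you might enter a Location such as C:\sasdata\sales, and on a UNIX server a Location such as /data/sas/sales, together with a Library name such as SALESLIB. The library name that you choose then appears as a schema when you add tables to the datasource.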

Related Topics
• Installing drivers for SAS connections on page 427

Adding SQL Server datasources

To create a datasource for SQL Server:


• Ensure that the connector for SQL Server is configured. Usually, your
Data Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for SQL Server.
Installing drivers is the minimal part of configuring connectors. It is also
done by your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select SQL Server, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your SQL Server database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your SQL Server database. If you
are not sure which resource to choose, ask your Data Federator
administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for SQL Server datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your SQL Server datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210

Connection parameters for SQL Server datasources

Authentication mode: The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Database name: The name of the database to which to connect.

Defined resource: The Data Federator resource that holds the configuration information that you want to use.

Host name: The name of the host where the database is located.

Login domain: The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Password: The password that Data Federator enters for the username.

Port: The port to which to connect.

Prefix table names with the database name: Specifies if Data Federator should add the name of the database in its SQL queries to this JDBC source of data. You can select this option only if you are using a JDBC data source that can use the database name in queries.

Prefix table names with the schema name: Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Schema: The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

Table types:
• TABLE and VIEW: Choose this to see both tables and views when you click View tables.
• TABLE: Choose this to see only tables when you click View tables.
• VIEW: Choose this to see only views when you click View tables.
• ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.

User Name: The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in SQL Server datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Database name
• Host name
• Password
• Port
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Sybase datasources

To create a datasource for Sybase:


• Ensure that the connector for Sybase is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Sybase. Installing
drivers is the minimal part of configuring connectors. It is also done by
your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Sybase, and click Save.
The "Draft" configuration screen is displayed.


4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Sybase database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Sybase database. If you are
not sure which resource to choose, ask your Data Federator administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for Sybase datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your Sybase datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Sybase datasources

Authentication mode: The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Default database: The name of the database running on the Sybase server.

Defined resource: The Data Federator resource that holds the configuration information that you want to use.

Login domain: The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Network layer: The middleware type, for example JDBC or ODBC. Note: Data Federator inserts this value when you select the Defined resource, and it cannot be changed.

Password: The password that Data Federator enters for the username.

Prefix table names with the schema name: Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Schema: The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

Server name: The name that your Data Federator administrator defined in the Server name field of the Server object in Sybase Open Client. Ask your Data Federator administrator for the value that he or she chose for the parameter sybase-server-name while configuring the Sybase connector.

Set quoted identifier: For Sybase databases only, a boolean that determines if the quote character (") is used to enclose identifiers.

User Name: The username that Data Federator uses to connect to the source of data.
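As a general SQL illustration of quoted identifiers (the table and column names are hypothetical, and this is not specific to Data Federator), enclosing an identifier in the quote character allows it to contain mixed case or spaces, for example:

SELECT "Order Id" FROM "Sales Data"

The Set quoted identifier option controls whether Data Federator encloses identifiers in the quote character in this way when it queries the Sybase database.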

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526


Connection parameters in Sybase datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Default database
• Password
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Sybase IQ datasources

To create a datasource for Sybase IQ:


• Ensure that the connector for Sybase IQ is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Sybase IQ.
Installing drivers is the minimal part of configuring connectors. It is also
done by your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.

The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Sybase IQ, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Sybase IQ database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Sybase IQ database. If you
are not sure which resource to choose, ask your Data Federator
administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for Sybase IQ datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your Sybase IQ datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210


Connection parameters for Sybase IQ datasources

Authentication mode: The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Defined resource: The Data Federator resource that holds the configuration information that you want to use.

Login domain: The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

ODBC DSN: The ODBC Data Source Name to use.

Password: The password that Data Federator enters for the username.

Prefix table names with the schema name: Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Schema: The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

User Name: The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in Sybase IQ datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding Teradata datasources

To create a datasource for Teradata:


• Ensure that the connector for Teradata is configured. Usually, your Data
Federator administrator configures the connectors.
• Ensure that you have the necessary drivers installed for Teradata.
Installing drivers is the minimal part of configuring connectors. It is also
done by your Data Federator administrator.
• Ensure that you have the necessary parameters to indicate how to connect
to the database, for example the name of the machine where the database
is running. These are also available from your Data Federator
administrator.

1. Open the project to which you want to add the datasource, and at the top
of the Data Federator Designer screen, click Add, and from the pull-down
list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the list, select Teradata, and click Save.
The "Draft" configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
drop-down list, select the name of the resource that defines the parameters
for your Teradata database system.
The resource that you choose depends on the parameters that your Data
Federator administrator configured for your Teradata database. If you
are not sure which resource to choose, ask your Data Federator
administrator.

5. On the "Draft" screen, configure the parameters. Refer to the information about connection parameters for Teradata datasources for details.
You can use the parameters defined in a deployment context as values in these fields.
6. Add the datasource tables to your datasource. Refer to the information on adding tables to database datasources for details.
7. Click Save.
Your Teradata datasource is added.

Related Topics
• Managing login domains on page 525
• Adding tables to a relational database datasource on page 157
• Defining a connection with deployment context parameters on page 156
• Testing and finalizing datasources on page 210

Connection parameters for Teradata datasources

Authentication mode: The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Defined resource: The Data Federator resource that holds the configuration information that you want to use.

Login domain: The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Network layer: The middleware type, for example JDBC or ODBC. Note: Data Federator inserts this value when you select the Defined resource, and it cannot be changed.

ODBC DSN: The ODBC Data Source Name to use.

Password: The password that Data Federator enters for the username.

Prefix table names with the schema name: Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Schema: The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

User Name: The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in Teradata datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• ODBC DSN
• Password
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Creating JDBC datasources from custom resources

To create a datasource from a custom resource:


• Ensure that you or the Data Federator administrator has created the
custom resource that corresponds to your database.
• Ensure that you have the necessary driver software for your database
installed, for example JDBC drivers.
• Ensure that you have the necessary driver connection parameters to hand. These are normally available from the driver supplier.
• Ensure that you have the necessary database access and authentication
details to hand.

1. Access the project to which you want to add the datasource, and at the
top of the Data Federator Designer screen, click Add, and from the
pull-down list, click Add datasource.
The "New Datasource" screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
3. From the list, select JDBC from defined resource, and click Save.
The Draft configuration screen is displayed.
4. In the Connection parameters pane, from the Defined resource
pull-down list, select the custom resource that you want to use.
5. On the "Draft" screen, configure the parameters. Refer to the connection
parameters and descriptions information for details.
6. Add the datasource tables to your datasource. Refer to the information
on adding tables to a database datasource for details.
7. Click Save.
Your JDBC datasource is added.


Related Topics
• Connection parameters for JDBC from custom resource datasources on
page 135
• Adding tables to a relational database datasource on page 157
• Testing and finalizing datasources on page 210


Connection parameters for JDBC from custom resource datasources

Authentication mode: The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Defined resource: The Data Federator resource that holds the configuration information that you want to use.

JDBC connection URL: The database connection information in the form of a JDBC connection URL, for example: jdbc:mysql://localhost:3306/database-name. For other types of JDBC drivers, see the description of the property urlTemplate.

Login domain: The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Password: The password that Data Federator enters for the username.

Prefix table names with the database name: Specifies if Data Federator should add the name of the database in its SQL queries to this JDBC source of data. You can select this option only if you are using a JDBC data source that can use the database name in queries.

Prefix table names with the schema name: Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Schema: The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

Supports catalog: The database supports metadata catalog functionality.

Supports schema: Select this check box if the database supports schema catalog functionality.

Table types:
• TABLE and VIEW: Choose this to see both tables and views when you click View tables.
• TABLE: Choose this to see only tables when you click View tables.
• VIEW: Choose this to see only views when you click View tables.
• ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.

User Name: The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Creating generic database datasources

Creating generic JDBC or ODBC datasources

In order to create a generic JDBC or a generic ODBC datasource:

• Ensure that you have the necessary driver software for your database
installed, for example JDBC or ODBC drivers.
• Ensure that you have the necessary driver connection parameters to
hand. These are normally available from the driver supplier.
• Ensure that you have the necessary database access and authentication
details to hand.

1. Access the project to which you want to add the JDBC or ODBC
datasource, and at the top of the Data Federator Designer screen, click
Add.
The New Datasource screen is displayed.

2. Enter a name and description for your datasource, and expand the
Datasource Type pull-down list.
The datasource options are displayed.
3. From the pull-down list, select either Generic JDBC datasource, or
Generic ODBC datasource, and click Save.
The Draft configuration screen is displayed.
4. In the Connection Parameters area, enter the connection details as
required.
5. When you have entered the connection parameters, click the Test button
to check that they are correct.
If the details are incorrect, a dialog box is displayed with the details. Use
this information to fix the problem.
6. In the Configuration Parameters area, enter the configuration details
as required.
7. In the Optimization Parameters area, select the optimization details as
required.
8. Once the connection is working, click Save.
Your datasource is added, and you can add datasource tables to it.

Related Topics
• Connection parameters for generic JDBC datasources on page 141
• Connection parameters for generic ODBC datasources on page 146
• Configuration parameters for generic JDBC and ODBC datasources on page 150
• Defining a connection with deployment context parameters on page 156
• Optimization parameters for generic JDBC and ODBC datasources on page 153
• Adding tables to a relational database datasource on page 157
• Testing and finalizing datasources on page 210


Connection parameters for generic JDBC datasources

Authentication mode
The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Defined resource
The Data Federator resource that holds the configuration information that you want to use.

Driver location
The directory and filename of the JDBC driver that Data Federator uses to connect to your JDBC datasource. Enter the path and filename of the driver file that is delivered with the database management system that you want to use as a source.
For example, to set the location of the JDBC driver for a MySQL database, if you put the file in the Data Federator installation directory, type:
C:\Program Files\BusinessObjects Data Federator XI 3.0\LeSelect\drivers\mysql-connector-java-3.1.12-bin.jar

Driver properties
The properties of the JDBC driver. The properties that are available depend on the database management system to which you want Data Federator to connect. See the documentation for your database management system for a description of the available properties.
In this field, enter properties in the form:
property-name=value[;property-name=value]*
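For example, for a MySQL Connector/J driver you might enter a string such as the following (an illustration only; the property names that are available depend entirely on your driver, so check its documentation before using them):
useUnicode=true;characterEncoding=UTF-8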

JDBC connection URL
The database connection information in the form of a JDBC connection URL. For example: jdbc:mysql://localhost:3306/database-name. For other types of JDBC drivers, see the description of the property urlTemplate.

JDBC driver
The filename of the JDBC driver to load.

Login domain
The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

Password
The password that Data Federator enters for the username.

Prefix table names with the database name
Specifies if Data Federator should add the name of the database in its SQL queries to this JDBC source of data. You can select this option only if you are using a JDBC data source that can use the database name in queries.

Prefix table names with the schema name
Specifies if Data Federator should add the name of the schema in its SQL queries to this JDBC data source. You can select this option only if you are using a JDBC data source that can use the schema name in queries, such as Oracle.

Schema
The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

Session properties
The session properties that Data Federator attempts to set on the source database management system.
In this field, enter properties in the form:
property-name=value[;property-name=value]*

Supports catalog
The database supports metadata catalog functionality.

Supports schema
Select this check box if the database supports schema catalog functionality.

User Name
The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526


Connection parameters for generic ODBC datasources

Authentication mode
The method to use to authenticate users' login credentials:
• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Defined resource
The Data Federator resource that holds the configuration information that you want to use.

Login domain
The name that your Data Federator installation uses to refer to a database server or set of servers on which you can log in. Your Data Federator administrator chose this name when adding login domains.

ODBC DSN
The ODBC Data Source Name to use.

ODBC version
The ODBC version.

Password
The password that Data Federator enters for the username.

Prefix table names with the database name
Specifies if Data Federator should add the name of the database in its SQL queries to this JDBC source of data. You can select this option only if you are using a JDBC data source that can use the database name in queries.

Schema
The names of the schemas of tables that you want to use, separated by commas. The % character (percent) means "all schemas". If you use multiple schemas, you should use the option Prefix table names with the schema name to distinguish tables from different schemas.

Supports catalog
The database supports metadata catalog functionality.

Supports schema
Select this check box if the database supports schema catalog functionality.

User Name
The username that Data Federator uses to connect to the source of data.

Related Topics
• Managing login domains on page 525
• Mapping user accounts to login domains on page 526

Connection parameters in generic JDBC datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for the following fields.
• JDBC connection URL
• Password
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.
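For example, a JDBC connection URL for a MySQL source could be written with hypothetical parameter names such as DB_HOST, DB_PORT and DB_NAME (illustration only; use the parameter names defined in your own deployment contexts):
jdbc:mysql://${DB_HOST}:${DB_PORT}/${DB_NAME}
At deployment time, Data Federator replaces each ${...} reference with the value defined in the selected deployment context.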

Related Topics
• Defining a connection with deployment context parameters on page 156

Connection parameters in generic ODBC datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for the following fields.
• ODBC DSN
• Password
• Schema
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156


Configuration parameters for generic JDBC and ODBC datasources

Use these parameters when configuring a generic JDBC or ODBC datasource, that is, a JDBC or ODBC datasource for which there is no existing resource file.

Cast column types
Lets you list mappings between the database type and the JDBC type. Write the database type followed by = (equals), then the JDBC type. Separate mappings with ";" (semicolon). For example:
BOOLEAN=BIT;STRING=VARCHAR
This is useful when the default mapping done by the driver is incorrect or incomplete.

Ignore keys
Specifies if Data Federator ignores the keys when retrieving a schema from the JDBC data source.

Maximum load per connection
The maximum load authorized for each connection. This value can be used to control the maximum number of cursors open per connection. You should also set this parameter to 1 if the driver used to connect to the database does not support connection sharing among threads. The default is 0. 0 means no limit.

Quoted names
Specifies if Data Federator puts quotes around table names when connecting to the JDBC data source.

Set read only
Specifies if Data Federator will attempt to set the read only option when connecting to your JDBC data source.
• Select this check box if you are sure that your JDBC data source supports the read only option. This optimizes performance.
• Clear this check box if your JDBC data source does not support the read only option.

SQL dialect
The SQL dialect to use. The choices are:
• SQL 92
• SQL 99
• ODBC

SQL states for connection failure
The list of specific SQL State codes that can be used to detect a stale connection when an SQLException is thrown by the underlying database. Standard X/OPEN codes for connection failures (starting with the two-character class 08) do not need to be specified here. An example of a specific code for Oracle is 61000 (ORA-00028: your session has been killed). In this case, you should enter the code 61000. The default is empty. Elements are separated by the character semicolon (;) with no space between elements.

Table types
• TABLE and VIEW: Choose this to see both tables and views when you click View tables.
• TABLE: Choose this to see only tables when you click View tables.
• VIEW: Choose this to see only views when you click View tables.
• ALL: Choose this to avoid filtering the objects that you see in the database. When you click View tables, you will see all objects.

SQL string type
Specifies the norm of the syntax that Data Federator uses to write requests for your JDBC data source.

Related Topics
• About datasources on page 66

Optimization parameters for generic JDBC and ODBC datasources

You use the "Optimization Parameters" pane to specify the capabilities that your database management system supports. This reduces the amount of processing that Data Federator performs, and improves performance.

For the capabilities that you do not select, Data Federator performs the operation itself. For example, if you clear the Supports aggregates check box, Data Federator performs the aggregate operation on the data it retrieves from the database management system.

Note:
To get the best performance, check the capabilities of your database, and select the matching capability check boxes for as many options as possible.

Managing database datasources

Using deployment contexts

Deployment contexts allow you to easily deploy a project on multiple servers.


Using deployment contexts, you can define multiple sets of datasource
connection parameters to use with a project's deployment. Each deployment
context represents a different server deployment.

For example, you can define a deployment context for a group of datasources
running on a development server, and another deployment context for the
same group of datasources running on a production server.

When you define the connection parameters for a datasource, in place of


the configuration values, you use the corresponding parameter name. At
deployment time, you select a deployment context, and Data Federator
substitutes the appropriate values for the connection.

Within each deployment context that you define for a project, you use an
identical set of deployment parameter names to define the connection
parameters common to each datasource. You then use these names in your
datasource definition rather than the actual values, and at deployment time,
Data Federator substitutes the values corresponding to the deployment type
that you select.

The deployment parameters that you can use with a datasource definition depend on the connection's resource type.
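For illustration (the context and parameter names here are hypothetical), a project could define two deployment contexts that use the same parameter names with different values:
• Development context: DB_HOST = devserver1, DB_PORT = 1521
• Production context: DB_HOST = prodserver5, DB_PORT = 1521
A datasource whose connection fields refer to ${DB_HOST} and ${DB_PORT} then connects to devserver1 or prodserver5, depending on the deployment context that you select at deployment time.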

Related Topics
• Defining deployment parameters for a project on page 155

• Defining a connection with deployment context parameters on page 156

Defining deployment parameters for a project

Perform this task to create deployment contexts so that you can deploy the
project on multiple servers. Typically, you would create a deployment context
for each server on which the project is to be deployed.
1. Open a project and select it, and in the Tree list, select Configuration to
display the Configuration screen.
2. On the "Configuration" screen, click Deployment contexts to expand
the Deployment Contexts pane.
3. Click the Add new context link to display the Add a new context screen.
4. In the General pane, enter a name for your context. If this context is to
be the default, select the is default check box. The parameters that you
define are then used when you deploy a project and do not select a
deployment context.
5. On the Add a new deployment context screen, click the Add
Parameters button.
6. From the list that appears, select the number of parameters that you want
to add to the deployment context.
A row for each parameter appears, ready for you to supply the parameter
definitions.
7. Define each parameter, by entering a name and a corresponding value
in each of the Name and Value fields.
8. When you have finished defining parameters, click OK to save your
settings.

Example:
To define a variable in the deployment context for a production server with a host name of prodserver5, you would enter the following in the Name and Value fields:
• Name: ProductionServer
• Value: prodserver5

In your connection definition, you would then define the host name as follows to specify prodserver5 with the deployment context:

${ProductionServer}

Related Topics
• Defining a connection with deployment context parameters on page 156
• Using deployment contexts on page 325

Defining a connection with deployment context parameters

You must have defined a deployment context with the connection configurations to use.

Once you have defined a deployment context, you can use the parameters it contains to configure a connection. The parameters with which you can use a deployment context depend on the resource type that you are using.
For example, for a Microsoft Access datasource, you can use the following:
• dsn
• user
• password

For a DB2 datasource, you can use the following:


• host
• port
• database
• user
• password
• schema

• To use a deployment parameter, in the parameter field, use the syntax ${paramname}, where paramname is the name of the deployment parameter as defined in the Deployment parameters definition screen.

Example:
If you have defined a deployment parameter DeployHost to define the host, you would use the following syntax in the Connection Definition's Host name field: ${DeployHost}

Adding tables to a relational database datasource

• You must have added a datasource with a relational database source type, that is, either a datasource defined using a resource, or a generic JDBC or ODBC datasource, for example:
• JDBC from defined resource
• Generic JDBC
• ODBC from defined resource
• Generic ODBC

Once you have defined a relational database datasource, you can add tables
to it.
1. In the tree list, expand your-datasource-name, then click Draft.
The Datasources > your-datasource-name > Draft window appears.

2. In the Datasource tables pane, click Update all tables.


The Datasources > your-datasource-name > Draft > Update tables
window appears.

3. Select the tables that you want to include in your new datasource, and
click Save.
You need to make your datasource final before you can use it.

Related Topics
• Creating generic JDBC or ODBC datasources on page 138
• Updating the tables of a relational database datasource on page 158
• Making your datasource final on page 212


Updating the tables of a relational database datasource

Once you have defined a relational database datasource, you can update
the tables in it.
1. In the tree list, expand your-datasource-name, then click Draft.
The Datasources > your-datasource-name > Draft window appears.

2. In the Datasource tables pane, click Update all tables....


The Datasources > your-datasource-name > Draft > Update tables
window appears.

The tables that you previously selected appear as in use.

The tables that you have not selected appear as not in use.

The tables that you previously selected but that have been dropped from
the data access system appear as no longer usable. You must delete
the definitions of these tables from your datasource.

3. Select the tables that you want to include in your new datasource, and
click Save.

Creating text file datasources


Data Federator can use text files as a datasource. There are many formats
of text files from which Data Federator can extract data. The simplest of
these formats is comma-separated value (CSV).

In general, Data Federator can understand any format in which the data in
the text file is arranged into columns. The columns can be a fixed length or
separated by a specific character. When you add a text file datasource, you
can use the File Extraction Parameters pane to configure these and other
options. Data Federator can then transform your text files into relational,
tabular data.

You can create a datasource from a single text file, or you can create a
datasource from multiple text files.


About text file formats

Data Federator supports access to two types of text files:


• files with fields of fixed length
• files with field separators (commonly called "CSV files")

Files with separators are generally simpler to generate and are easier to
read. Many software applications can generate these types of files from
internal data.

Note:
Depending on the choice of character (or the character sequence) used as
a separator, you must be certain that the separator is not included in the
value for a field. If the character that you chose as a separator appears in
the value of a field, Data Federator will add two fields instead of one. Files
with fields of fixed length are more restrictive because the size of each field
does not vary.
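For example (a hypothetical illustration of the separator rule), if the separator is "," (comma) and a name field contains the value SMITH, JOHN, Data Federator reads two fields (SMITH and JOHN) instead of one. In this case, choose a separator character that cannot appear in your data.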

Setting a text file datasource name and description

You can define a text file to use as a datasource.


1. At the top left of the Designer window, click Add, and from the pull-down
list, select Add Datasource.
The New datasource window is displayed.
2. Click Add datasource, then click Text file.
The Datasources > Draft window appears.

3. Enter a name and description for your datasource in the Datasource name and Description boxes, then click Save.
The datasource is added, and it appears in the tree list at the left of the
screen. The datasource Draft screen is displayed. You can now add
tables, then select the source file or files for it.

Related Topics
• Selecting a text data file on page 160


Selecting a text data file

• You must have defined a text file datasource, and allocated a name to
it.

This procedure describes how to use a single text file as a datasource. To


use multiple text files, see the documentation on selecting multiple text files
as a datasource.
1. In the tree list, expand your-datasource-name, then click Draft.
The Datasources > your-datasource-name > Draft window appears.

2. In the Datasource tables pane, click Add.


The Datasources > your-datasource-name > Draft > New table
window appears.

3. In the Table name box, type a name for your new table.
4. In the General pane, click Browse.
The Browse frame appears.

5. Use the Browse frame to locate and select your source file.
To browse a different drive, enter the drive letter in the Directory box,
and click Browse again.

For example, to locate a text file on the Q: drive, enter "Q:\" in the
Directory box and click Browse.

When you select a file, the file name appears in the File name or pattern
box.

6. Click Save.
Data Federator references the file for use with this datasource table.

Related Topics
• Setting a text file datasource name and description on page 159
• Selecting multiple text files as a datasource on page 173


Configuring file extraction parameters

• You must have chosen a source file for your datasource.

After you choose your source file, you define how Data Federator parses
the text that the file contains.
1. In the tree list, expand your-datasource-name, then expand Draft, then click your-datasource-table-name in the tree list.

The Datasources > your-datasource-name > Draft > your-datasource-table-name window appears.

2. In the File extraction parameters pane, complete the text boxes that
describe how Data Federator parses your source file.
• To preview the contents of your file as you configure the file extraction
parameters, click Preview in the General pane.

The Data sheet frame appears, showing the first rows of your source
file.

3. In the Field formatting pane, complete the text boxes that describe the
format of the data in the fields of your source file.
4. Click Save.
Data Federator saves the definition of the format of your source file, which
allows Data Federator to parse the data in the file.

Related Topics
• File extraction parameters on page 162
• Field formatting parameters for text files on page 164
• Selecting a text data file on page 160


File extraction parameters

Use these parameters to help you when configuring text file extraction parameters.

File charset
The character set used in your source file.

File type
• Delimited
  Choose this when your source file has entries separated by a character. For example:
  MARY;123;SALES
  JOHN;456;PURCHASING
• Fixed width
  Choose this when your source file has entries with fixed widths. For example:
  MARYxxx 123xx SALESxxx
  JOHNxxx 456xx PURCH.xx

Field separator
The character that separates fields in your source file.

Ignore line separator
Specifies if, for text files with fields of fixed width, Data Federator will consider the newline character (\n) as the end of a line, if this character occurs in the last field before the normal number of characters.
For example, for fields of six characters, the last field is normally padded to its full width:
JOExxx USAxxx 1980xx Mxxxxx
However, the last field may not be padded if it ends in a newline character:
JOExxx USAxxx 1980xx M\n
• Select this check box, for fields of fixed width, if your data is padded in the last field.
• Clear this check box if your data uses the newline character to end the last field.

Text qualifier
The character that surrounds text, for example " (double quote) or ' (single quote).

Escape character for text qualifier
The character that allows a text qualifier to be treated literally.
For example, if the text qualifier is " (double quote) and the escape character is \ (backslash), then Data Federator considers the entire line that follows as one text:
"roles \"admin\", \"manager\", \"root\""

Related Topics
• Configuring file extraction parameters on page 161

Field formatting parameters for text files

Use these parameters when configuring the file extraction parameters.

Date and time language
Specifies the language in which month names and weekday names are represented in your source file.
For example, if your source file has dates like:
13 janvier, 2000
you should set your Date and time language to fr (French).
These codes are the lower-case, two-letter codes as defined by ISO-639. You can find a full list of these codes at a number of sites, such as: http://www.ics.uci.edu/pub/ietf/http/related/iso639.txt

Date and time country
Specifies the country of the localization format used to represent dates in your source file.
For example, if your Date and time language is English, and your source file has dates like:
31/12/2002
you should set your Date and time country to UK.
If your Date and time language is English, and your source file has dates like:
12/31/2002
you should set your Date and time country to US.
These codes are the upper-case, two-letter codes as defined by ISO-3166. You can find a full list of these codes at a number of sites, such as: http://www.chemie.fu-berlin.de/diverse/doc/ISO_3166.html

Decimal separator
The character that separates whole numbers from decimals in your source file.
For example, if your source file has numbers like:
123.99
you must set the Decimal separator to "." (period).

Thousands grouping separator
The character that separates groups of thousands in your source file.
For example, if your source file has numbers like:
26,120,500,000
(for 26 billion, 120 million, 500 thousand)
you must set the Thousands grouping separator to "," (comma).

DATE format, TIMESTAMP format, TIME format
Specifies the format in which dates, times and timestamps are represented in your source file.

Related Topics
• Configuring file extraction parameters on page 161
• Date formats used in text files on page 176
• Using data types and constants in Data Federator Designer on page 604


Connection parameters in text datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for the following fields.
• Host name
• Password
• Port
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Automatically extracting the schema of your datasource table

• You must have configured the extraction parameters of your source file.
• The text file that you are configuring must have a header row.

Once you have configured the extraction parameters of your source file, you
must define the schema of your datasource table. Use this procedure to
automatically extract the schema from the first line in your source file.
1. In the tree list, expand your-datasource-name, then expand Draft, then click your-datasource-table-name in the tree list.

The Datasources > your-datasource-name > Draft > your-datasource-table-name window appears.

2. Click Table schema.


The Datasources > your-datasource-name > Draft > your-datasource-table-name > Table schema window appears.

3. In the Schema definition pane, select First line after the ignored header
of the file.
To preview the contents of your file as you define the schema, click
Preview in the General pane.

The Data sheet frame appears, showing the first rows of your source file.

4. Click Extract schema.


A confirmation window appears. Click OK to extract the schema and
replace the current schema.

Data Federator extracts the fields in the file, and uses the first line to
create a title for each column.

5. Verify and select the correct types for all the columns.
6. Select the check boxes under the key icon to specify the primary key.
Note:
You can select multiple check boxes to indicate a primary key defined by
multiple columns.

7. Click Save.
The text file is registered as the source of your datasource.

Related Topics
• Configuring file extraction parameters on page 161
• Generating a schema when a text file has no header row on page 171
• Using a schema file to define a text file datasource schema on page 171
• Using data types and constants in Data Federator Designer on page 604
• Making your datasource final on page 212


Indicating a primary key in a text file datasource

You can indicate the primary key while defining the schema of your
datasource.
1. Open the Table schema window.
2. Define one or more columns as a key in the Table schema pane, by selecting the check boxes under the Key icon.

Managing text file datasources

Editing the schema of an existing table


1. In the tree list, expand your-datasource-name, then expand Draft, then
click your-datasource-table-name in the tree list.

The Datasources > your-datasource-name > Draft > your-datasource-table-name window appears.

2. In the Table schema pane, click Table schema.


The Datasources > your-datasource-name > Draft > your-datasource-table-name > Table schema window appears, where you can edit the table schema.


Using a schema file to define a text file datasource schema

Data Federator lets you import the schema of a datasource from an external
file. The schema must be in a DDL format.
1. Write your schema file, using one of the formats that Data Federator
recognizes.
2. Open the Table schema window.
3. In the Schema definition pane, from the Schema location text box, select SQL DDL file or Proprietary DDL file.
4. In the Schema definition pane, choose your source file by clicking Browse, then navigate to your file and click Select.
To preview the contents of your source text file as you define the schema,
click Preview in the General pane.

The Data sheet frame appears, showing the first rows of your source file.

5. Click Extract schema.


Data Federator extracts the schema from your DDL file.

Related Topics
• Automatically extracting the schema of your datasource table on page 168
• Formats of files used to define a schema on page 608

Generating a schema when a text file has no header row

When your source file does not have a first line that defines the names of
the columns, Data Federator can extract the number of columns, which you
can then name manually.
1. Open the Table schema window.
2. In the Schema definition pane, from the Schema location text box,
select Automatic from the structure of the file.


To preview the contents of your file as you define the schema, click
Preview in the General pane.

The Data sheet frame appears, showing the first rows of your source file.

3. Click Extract schema.


Data Federator extracts the correct number of columns, based on the
character you entered as the column separator.

4. Name the columns manually.


5. Click Save.
The text file is registered as the source of your datasource. You can now
make your datasource final.

Related Topics
• Automatically extracting the schema of your datasource table on page 168
• Defining the schema of a text file datasource manually on page 172
• Making your datasource final on page 212

Defining the schema of a text file datasource manually

You can define the schema of your datasource manually if the schema
information is not contained in the source file.
1. Open the Table schema window.

To do this... follow this step...

Add more columns
Select a row, then click Add columns, then click the number of new columns you want to add.

Change a column name
Edit the name of the column in the Column name column.

Change a column type
Edit the type of the column in the Column type column.

Delete a column
Click the Delete icon in the row that defines the column that you want to delete.

Extract the entire schema again
In the Schema definition pane, click Extract schema.

Note:
The columns must be indicated in the order that the fields appear in the
datasource.
When the source of the datasource is a file containing fixed-length fields,
you must also indicate the number of characters of each field.

2. Click Save to save your changes.

Related Topics
• Using data types and constants in Data Federator Designer on page 604

Selecting multiple text files as a datasource

You can specify multiple files simultaneously, when you are selecting a text
file as a datasource. Note, however, that the files must be of the same table
schema.
1. Specify your multiple source files in the File name or pattern text box in
the File pane of the Datasources > your-datasource-name > Draft
> your-datasource-table-name window.


The File name or pattern text box indicates the names of the files that
are used to populate the datasource table. You can associate multiple
source files to the same datasource table by separating the names of
each file with a semi-colon ';', and also by using the following symbols:
• Use the symbol "*" to indicate any sequence of characters. For
example "dat*.csv" specifies all the files with names starting with "dat"
and ending with ".csv".
• Use the symbol "?" to indicate any single character. For example "dat?.csv" specifies the files whose names are composed of the character string "dat" followed by any character, then followed by the extension "csv".
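For example (hypothetical file names, shown only for illustration), the following entry associates the files sales2007.csv and sales2008.csv, plus every file whose name is "region" followed by a single character and the extension "csv", with the same datasource table:
sales2007.csv;sales2008.csv;region?.csv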

2. Click Save
Data Federator references the file for use with this datasource table.

Related Topics
• Selecting a text data file on page 160

Numeric formats used in text files

The following examples show some of the numeric formats that Data Federator reads from text files when you use a text file as a datasource.

• "+1 234.56" with column type INTEGER is interpreted as 1234
• "-1 234 euros" with column type INTEGER is interpreted as -1234
• "-1 234 567.89" with column type DECIMAL is interpreted as -1234567.89
• "+1 234 567.89 euros" with column type DECIMAL is interpreted as 1234567.89
• "-1 234 567,89" (the decimal separator is "," (comma)) with column type DECIMAL is interpreted as -1234567
• "-1 e+2 euros" with column type DECIMAL is interpreted as -100

These examples assume that you have set the decimal separator to "."
(period).

Rules that Data Federator uses to read numbers from text files
• For integers, no error is returned if the string data overflows the size of
an integer (MAX_VALUE = 2147483647 (2^31-1) and MIN_VALUE =
-2147483648 (-2^31)).
• For doubles, no error is returned if the string data overflows the size of a
double (64 BITS). The value is truncated.
• White space is removed.
• Parsing stops at the first non-digit character (except decimal separators,
grouping separators, exponential symbols ("e" and "E") and signs ("+"
and "-")).
• The exponential symbol and the decimal separator can be used only one
time, otherwise parsing stops.
• The exponential symbol must be followed by a digit or a sign, otherwise
parsing stops.
• The symbols "+" and "-" can be used at the beginning of the string data
or after the exponent symbol.
• In Data Federator Designer, if you do not indicate any decimal or grouping separators, the application uses the default separators corresponding to the language field.

• No error is returned if you define the same symbol for column separator,
decimal separator and grouping separator (column separator has priority
over decimal separator, which has priority over grouping separator).
• If parsing cannot complete while following these rules, Data Federator
stops parsing and throws an exception.

Date formats used in text files

The following examples show some of the date formats that you can write in the Date format box.

• For dates like 2002-06-01, 1999-12-31, or 1970-01-01, enter the format: yyyy-MM-dd
• For dates like 10:20 AM January 31st, 1998 or 8:00 PM March 10th, 1990, enter the format: hh:mm aa MMMM dd, yyyy
• For dates like 1/2/95, 12/15/01 or 4/30/2001, enter the format: M/D/YY

For details on date formats, see the Java 2 Platform API Reference for the
java.text.SimpleDateFormat class, at the following URL:

http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html.

Note:
For extracting hours, the pattern hh or KK extracts in 12-hour time, while HH
or kk extracts in 24-hour time.
The following table shows the results of different patterns on hour values.

The hour value... using the pattern... results in the value...

'00' hh 00

'00' HH 00

'12' hh 00

'12' HH 12

'24' hh 00

'24' HH 00

Related Topics
• Field formatting parameters for text files on page 164
• Using data types and constants in Data Federator Designer on page 604

Modifying the data extraction parameters of a text file

You can modify the data extraction parameters of the draft version of a
datasource.

Changing the data extraction parameters of your datasource has the following
consequences:
• Your table schemas are erased.

1. In the bottom of the tree list, click Datasources.


2. Expand your-datasource-name, then click Draft.


The Datasources > your-datasource-name > Draft window appears.

3. Modify the data extraction parameters.

Related Topics
• File extraction parameters on page 162

Using a remote text file as a datasource

You require the machine address, user name, password, and port number
for the remote machine.

Data Federator can access text files on a remote machine on a network. The
following options are available:
• Use an SMB share providing that your machine is in the same SMB
domain or workgroup as the distant machine.
• Use an FTP account.

To create a datasource using a text file on a remote machine:


1. In the Configuration pane, set the Source type parameter to Text file.
2. In the Configuration pane, set the Protocol parameter to FTP file system
or SMB share.
3. In the Configuration pane, set the Hostname, Port, Username, and
Password fields to the correct access details for the remote machine.
4. When adding a table to a remote FTP source, in the File name or pattern field, configure the name of your remote text file.
For a distant source over FTP, indicate the absolute path. Note that a
root path for an FTP connection can be different from the root path for a
distant machine. For example, a root path for FTP may be /public/data
while the absolute path on the distant server is /home/user/public/data.

For a distant source on an SMB network, you must indicate the path from
the shared directory. For example if the shared directory is shareDir and
the files are contained in a sub directory data, then you must indicate the
path as shareDir\data.

The path must start with the name of the shared directory without the
leading backslash "\".

Note:
If connecting to a public SMB directory on UNIX, you must log in with the
username guest.

Creating XML and web service datasources

About XML file datasources

Data Federator can use an XML file as a datasource. The elements and
attributes in the XML file are mapped to tables and columns, depending on
how you configure the datasource. You can use the Elements and Attributes
pane to configure which tables and columns you want to create from the
elements in the XML file.

Related Topics
• Using the elements and attributes pane on page 189

Adding an XML file datasource

You can add a datasource from the Datasources window.


1. In the tree list, click Datasources
2. Click Add datasource, then click XML data source.
The Datasources > Draft window appears.
3. Complete the datasource name and description in the Datasource name
and Description boxes, then click Save.


Your datasource is added, and you can choose and configure a source
file for it.

Choosing and configuring a source file of type XML

• You must have added a datasource whose source type is XML file.

1. In the tree list, expand your-datasource-name, then click Draft.


The Datasources > your-datasource-name > Draft window appears.

2. Select your source file.


Note that multiple XML source files must have the same schema.
3. In the Configuration pane, click Browse to select your source file.
The Browse frame appears.
4. Select your source file from the file list.
If you want to browse a different drive, enter the drive letter in the
Directory box, and click Browse in the Browse frame.

For example, enter Q:\ in the Directory box and click Browse.

5. Click Select.
The file name appears in the XML file name box in the Configuration
pane.

6. Select one of the following radio buttons depending on the location of


your XML schema file:
• Inside XML file
• From external XSD (Xml Schema Definition) file
If you are sourcing an external XSD file, click Browse to the right of the
XML schema file name field and navigate to and select it as described
in Step 4, above.
7. Select one of the following radio buttons depending on how your XML
datasource tables should be generated:
• Normalized
• Denormalized

Normalized here refers to tables where an element's foreign key is only
that of its immediate parent element. Denormalized refers to tables whose
elements may have foreign keys for all their parents.
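As a simplified illustration (the element names are hypothetical), consider an XML file whose elements are nested as customer > order > item:

<customer id="C1">
  <order id="O1">
    <item sku="A100"/>
  </order>
</customer>

With Normalized, the table generated for item carries a foreign key to its immediate parent order only. With Denormalized, the item table may carry foreign keys to both order and customer.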

8. Click Generate Elements and Attributes.


The Elements and Attributes pane is displayed showing a populated
Elements field in List view.

Related Topics
• Selecting multiple XML files as datasources on page 199
• Adding an XML file datasource on page 179
• Using a remote XML file as a datasource on page 200

Connection parameters in XML datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for the following fields.
• Host name
• Password
• Port
• User Name

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Adding datasource tables to an XML datasource

• You must have chosen and configured a source file of type "XML".


Once you have defined an XML datasource, you can add tables to it.

You do this by selecting the required Elements and Attributes in the


Elements and Attributes pane, and clicking Create tables.
1. In the tree list, expand your-datasource-name, then click Draft.
The Datasources > your-datasource-name > Draft window appears.
2. In the Elements and Attributes pane, select the Elements and
Attributes that you want to appear in your datasource tables and click
Create tables.
The Tables pane appears showing your selected Elements and
Attributes.

3. Select the datasource tables that you want to include in your new
datasource, and click Save.

Related Topics
• Choosing and configuring a source file of type XML on page 180
• Using the elements and attributes pane on page 189
• Making your datasource final on page 212

About web service datasources

Data Federator can use a web service as a datasource. The elements and
attributes in the response of the web service are mapped to tables and
columns, depending on how you configure the datasource.

Some of the new concepts introduced by Data Federator web services are:
• SQL access to web services
• input columns (to pass values to web service parameters)
• automatic mapping of XML to relational schemas

The responses of web service requests appear in Data Federator in tabular


form. You can use the "Elements and Attributes" pane to map the web service
response to tables and columns.

Related Topics
• Using the elements and attributes pane on page 189

• Mapping values to input columns on page 233

Adding a web service datasource

You can add a datasource using the Add button beneath a project's tab.
1. Click Add, then select Add datasource.
The New Datasource window appears.

2. Complete the datasource name and description in the Datasource name


and Description boxes.
3. From the Datasource type box, select Web service data source.
4. Click Save.
Your datasource is added, and you can choose and configure a web
service.

Extracting the available operations from a web service

• You must have added a datasource whose source type is web service.

1. Edit your web service datasource.


2. From the WSDL location list, choose HTTP.
3. In the URL box, type the URL of your WSDL file.
For example, you could type http://www.xignite.com/xQuotes.asmx?WSDL
to use the Xignite web service for checking stock quotes.

4. Select one of the following radio buttons depending on how the datasource
tables of your web service should be generated:
• Normalized
• Denormalized
Normalized here refers to tables where an element's foreign key is only
that of its immediate parent element. Denormalized refers to tables whose
elements may have foreign keys for all their parents.

5. Click Generate operations.


The "Operation selection" pane expands, letting you select the operations
that you want to access from the web service.

Related Topics
• Adding a web service datasource on page 183

Selecting the operations you want to access from a web service

• You must have added a datasource whose source type is web service.
• You must have extracted the available operations from the web service.

1. Edit your web service datasource.


2. In the "Operation selection" pane, check the boxes beside the operations
you want Data Federator to access.
3. Click Generate operations output schemas.

The "Operations output schemas" pane expands, and you can select
which elements you want Data Federator to convert to tabular form.

Related Topics
• Adding a web service datasource on page 183
• Extracting the available operations from a web service on page 183
• Using the SOAP header to pass parameters to web services on page 186
• Selecting which response elements to convert to tables in a web service
datasource on page 187

Connection parameters in web service datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for the following field.
• URL

To use a deployment context parameter in a datasource definition field, use the syntax:
${parameter}
where parameter is the deployment context parameter that you want to use.

Related Topics
• Defining a connection with deployment context parameters on page 156

Authenticating on a web service datasource

There are two parts to the authentication on web services: on the server that
hosts the web services, and using the SOAP header to pass authentication
parameters to the web service operations.

To authenticate on the server that hosts the web service datasources, Data
Federator lets you use the same fields that you use for authenticating on
any type of datasource. To pass parameters in the SOAP header to the web
service operation, Data Federator provides a field where you can enter header
parameters.

Related Topics
• Authenticating on a server that hosts web services used as datasources
on page 185
• Using the SOAP header to pass parameters to web services on page 186

Authenticating on a server that hosts web services used as datasources

• You must have added a web service datasource.

To authenticate on the server that is hosting the web services that you are
using as datasources, you use the "Web service authentication" pane to
enter your authentication details.
1. Edit your web service datasource.


2. In the "Web service authentication" pane, choose the authentication mode


you want to use, and complete the required parameters.
The parameters you must enter depend on your choice of authentication
mode. Data Federator provides different types of authentication modes
that let you choose which credentials you want to send to the datasource.

Related Topics
• Adding a web service datasource on page 183
• Authentication methods for database datasources on page 207

Using the SOAP header to pass parameters to web


services

• You must have extracted the available operations from the web service.

If you are accessing a web service that requires parameters in the SOAP
request, you can add these parameters in the SOAP header.

Parameters in the SOAP header apply to a single operation in the web service, and they do not change when you make a request. For example, an operation that retrieves a stock quote may require that you provide a username and password in the SOAP header. The username and password would not change each time you made a request. In contrast, you could ask for a different stock symbol each time, so you would give the stock symbol in the SOAP body.
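
As an illustration only, the SOAP request for such an operation might look like the following sketch; the element names (AuthHeader, username, password, GetQuote, symbol) are hypothetical and depend entirely on the WSDL of the web service you call:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- static values entered in the "Header parameters" pane -->
    <AuthHeader>
      <username>demo</username>
      <password>secret</password>
    </AuthHeader>
  </soap:Header>
  <soap:Body>
    <!-- dynamic value supplied through an input column -->
    <GetQuote>
      <symbol>SAP</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>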

In the Data Federator interface, you provide a parameter in a SOAP header using the "Header parameters" pane. You provide a parameter in the SOAP body using an input column.
1. Select the operations that you want to access from the web service.
2. In the "Operation selection" pane, click Header parameters.
3. If the web service requires header parameters, such as authentication
tokens or passwords for each operation, use the SOAP header to pass
parameters to the web service.
Depending on the definition of the web service, the content of the header parameters may differ; some web services have no header parameters at all.

Check the Sensitive box if the value of the parameter is a password or other sensitive data.

Related Topics
• Extracting the available operations from a web service on page 183
• Selecting the operations you want to access from a web service on
page 184

Selecting which response elements to convert to tables in a web service datasource

• You must have chosen and configured a source file for your web service
datasource.
• You must have extracted the available operations from the web service.
• You must have selected the operations you want to access from a web
service.

Once you have defined a web service datasource, you can add tables to it.

You do this by selecting the required elements and attributes in the Operations output schemas pane, then clicking Create tables.
1. Edit your web service datasource.
2. In the Operations selection pane, choose the operations that you want
the web service to return.
3. In the Operations output schemas pane, select the elements and
attributes that you want Data Federator to expose as tables.
a. In the Operations box, select an operation to see its elements and
attributes.
The elements and attributes that the operation returns appear in the
Operations output schemas pane.
b. In the Operations output schemas pane, select the elements and
attributes that you want to appear in your datasource tables.
4. Click Create tables.
The Tables pane appears, showing your selected elements and attributes.


See the documentation on defining the schema of a datasource for details on how to read the icons in the "Table schema" pane.

Related Topics
• Selecting the operations you want to access from a web service on
page 184
• Using the elements and attributes pane on page 189
• Making your datasource final on page 212
• Defining the schema of a datasource on page 204

Assigning constant values to parameters of web service operations

To assign constant values to parameters in web services, use pre-filters on the input columns that correspond to the parameters.

You can see which input columns correspond to web service parameters
when selecting which response elements to convert to tables, while creating
a datasource.

You create pre-filters when making the mappings from datasources to targets.

Related Topics
• Selecting which response elements to convert to tables in a web service
datasource on page 187
• Assigning constant values to input columns using pre-filters on page 233

Assigning dynamic values to parameters of web service operations

To assign dynamic values to parameters in web services, use table relationships on the input columns that correspond to the parameters.

You can see which input columns correspond to web service parameters
when selecting which response elements to convert to tables, while creating
a datasource.

You create table relationships when making the mappings from datasources
to targets.

Related Topics
• Selecting which response elements to convert to tables in a web service
datasource on page 187
• Assigning dynamic values to input columns using table relationships on
page 234

Propagating values to parameters of web service operations

To propagate values to parameters in web services, use input value functions on the input columns that correspond to the parameters.

You can see which input columns correspond to web service parameters
when selecting which response elements to convert to tables, while creating
a datasource.

You create input value functions when making the mappings from datasources
to targets.

Related Topics
• Selecting which response elements to convert to tables in a web service
datasource on page 187
• Propagating values to input columns using input value functions on
page 234

Managing XML and web service datasources

Using the elements and attributes pane

The Elements and attributes pane of the Datasources > your-datasource-name > Draft window allows you to select which elements and attributes from your XML or web service datasource you wish to use when you generate your datasource tables.

It consists of a List view and an Explorer view, each containing two panels: Elements and Attributes.

The following example of XML code shows several highlighted elements and
attributes:

<cpq_package version="2.0.0.0">
  <catalog_entry_path>ftp://ftp.google.com/pub/cp077.exe</catalog_entry_path>
  <filename>cp002877.exe</filename>
  <divisions>
    <division key="65">
      <division_xlate lang="en">Networking</division_xlate>
      <division_xlate lang="ja">Networking</division_xlate>
    </division>
    <division key="6">
      <division_xlate lang="en">Server</division_xlate>
      <division_xlate lang="ja">Server</division_xlate>
    </division>
  </divisions>
</cpq_package>

Data Federator maps these XML elements and attributes to its own elements and attributes, and generates datasource tables and columns from them, as follows:

• lang="en": an XML attribute. Data Federator treats it as an attribute and generates a column.
• <filename>: a simple XML element (no child elements). Data Federator treats it as an attribute and generates a column.
• <divisions>: a complex XML element (with child elements). Data Federator treats it as an element and generates a table.
• <division key="65">: a multi-value XML element (it occurs more than once in its parent element). Data Federator treats it as an element and generates a table.

Elements in List view are displayed in a list. In Explorer view they are
displayed in the Folder > Folder > File format of the Folders Explorer bar
in Windows Explorer. Attributes are always displayed in list form. Both
elements and attributes selected in List view remain selected when Explorer
view is selected, and vice versa.

The Elements panel contains elements which can be 'checked', 'unchecked', white, light blue, or dark blue. Attributes in the Attributes panel have identical properties, but cannot be dark blue. The following describes the meaning of these properties:

Table 4-25: Meanings of colors and check boxes

• Checked: its check box has been selected.
• Unchecked: its check box has been deselected.
• Dark blue: it was the last item on the screen to have been clicked; it has the focus. Note that attributes cannot be clicked, and therefore cannot be dark blue.
• Light blue: it has been modified from one state to another. For example, if an element is already included in a datasource and is checked, it is white. If it is then unchecked, it appears light blue, and it will not be included in the datasource tables when the list of datasource tables is updated by clicking Update Tables.
• White: it has been neither modified nor has the focus.

The following describes how different combinations of these colors and check boxes affect elements:

Table 4-26: Combinations of colors and check boxes on elements

• Unchecked and white: the element did not generate a datasource table currently listed in the Tables pane, nor has it been checked to generate one the next time Update Tables is clicked. Clicking its check box causes it to generate a datasource table, including all its attributes, when Update Tables is clicked.
• Unchecked and light blue: the element generated a datasource table currently listed in the Tables pane, but has been unchecked so that it will not generate one the next time Update Tables is clicked. Clicking its check box causes it to generate a datasource table, including all its attributes, when Update Tables is clicked.
• Checked and white: the element generated a datasource table currently listed in the Tables pane, and will continue to generate one the next time Update Tables is clicked. Clicking its check box causes its generated datasource table to be deleted from the Tables pane when Update Tables is clicked.
• Checked and light blue: the element did not generate a datasource table currently listed in the Tables pane, but will generate one the next time Update Tables is clicked. Clicking its check box prevents it from generating a datasource table when Update Tables is clicked.

The Find next feature allows you to locate all occurrences of an element, an attribute, or both. It is available in the Elements and Attributes panels of both the List and Explorer views, and is especially useful if you have several elements or attributes of the same name in a large XML or web service datasource.

The Find next feature is described below:


• Elements and attributes drop-down list-box: select elements, attributes, or elements and attributes, and display filtered results in the Elements and Attributes panels of both the List and Explorer views.
• Find next field: enter the element or attribute you wish to locate. Note: press Ctrl + Spacebar to activate autocomplete and display all previously entered terms.
• Find next link: locate, and highlight in dark blue, every occurrence of the term entered in the Find next field. When used in the Explorer view, it also shows its location in your XML or web service datasource.

Using the list view of the elements and attributes pane

The List view tab is located within the Elements and attributes pane of the Datasources > your-datasource-name > Draft window.


The List view tab consists of an Elements panel listing the datasource's elements, and an Attributes panel listing the attributes, if any, of the element highlighted in dark blue. The List view tab, in common with the Explorer view tab, allows you to select or de-select any element or its attributes for inclusion in your XML datasource tables.

When you select an element in List view, if there is more than one element of that name, for example 'DatesofAttendance (2)', all those elements (and their attributes) will be included in your datasource table. You can de-select one or every attribute of a selected element in the Attributes panel.

The List view tab contains the following features:

Elements Panel

• Header Row check box: select every instance of every element and their attributes for inclusion in / removal from your datasource table.
• Lower check boxes: select every instance of the selected element and its attributes for inclusion in / removal from your datasource table.
• Elements title: list the elements in ascending / descending order.
• View only elements already used for tables check box: view only those elements that appear in your current, not yet updated, datasource table.
• Individual elements: display the element in dark blue, and display its attributes in the Attributes panel.

Attributes Panel

• Header Row check box: select every attribute of the selected element for inclusion in / removal from your datasource table.
• Lower check boxes: select an individual attribute for inclusion in / removal from your datasource table.

Using the explorer view of the elements and attributes pane

The Explorer view tab is located within the Elements and attributes pane of the Datasources > your-datasource-name > Draft window.


The Explorer view tab displays your elements in expandable directory form. It allows you to select one or more occurrences of an element for inclusion in your datasource tables when two or more elements have the same name. Attributes are displayed in selectable (non-clickable) list form, as in the List view.

The Explorer view tab contains the following features:

Elements Panel

• Main view link: display the elements from the root directory. Note: this link is only enabled if you have clicked a 'More' icon. See 'More icon', below, for details.
• All elements drop-down list-box: display all elements, selected elements, selected elements and their children, or selected elements and their parents.
• Expand all link: expand the + signs if they are not already expanded, and display all child elements in the XML file.
• View details link: display a Details tab showing only the highlighted dark blue element and its direct parent / child path.
• Element box: highlight the element in dark blue and display its attributes in the Attributes panel.
• Check box within an Element box: select all the selected element's attributes for inclusion in / removal from your datasource table. Note: see 'Lower check boxes', below.
• More icon: display only the element to which it is attached as a 'root' element. Note: the path from the actual root element to the selected element is displayed at the top of the Elements panel.

Attributes Panel

• Header Row check box: select every attribute of the selected element for inclusion in / removal from your datasource table.
• Arrow in header row: expand or contract the Attributes panel.
• Lower check boxes: select an individual attribute for inclusion in / removal from your datasource table.

Selecting multiple XML files as datasources

You can specify multiple files simultaneously when you are selecting an XML
file as a datasource.

Note:
All your XML files must have the same schema, and your XML schema must
be an external XSD file.

1. Specify the multiple source files in the XML file name text box in the
Configuration pane of the Datasources > your-datasource-name
> Draft window.
The XML file name text box displays the names of the files that are used to populate the datasource table. You can associate multiple source files with the same datasource table by separating the names of the files with a semi-colon ';', and by using the following symbols (see the example after this list):
• Use the symbol "*" (asterisk) to indicate any sequence of characters. For example, "dat*.xml" specifies all the files with names starting with "dat" and ending with ".xml".
• Use the symbol "?" (question mark) to indicate any single character. For example, "dat?.xml" specifies the files whose names are composed of the character string "dat" followed by any single character, then followed by the extension ".xml".
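
For example, the following value (the file names are purely illustrative) associates all XML files whose names start with "orders_2008_", plus one explicitly named file, with the same datasource table:

orders_2008_*.xml;orders_archive.xml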


2. Select the From external XSD (Xml Schema Definition) file radio button
to the right of XML schema location, click Browse to the right of the
XML schema file name field and navigate to and select your external
XSD file.
Note:
You cannot have an XML schema location inside an XML file if you are
using multiple XML files as datasources.

3. Select one of the following radio buttons depending on how your XML
datasource tables should be generated:
• Normalized
• Denormalized
Normalized here refers to tables where an element's foreign key is only
that of its immediate parent element. Denormalized refers to tables whose
elements may have foreign keys for all their parents.

4. Click Generate Elements and Attributes.


The Elements and attributes pane is displayed showing a populated
Elements field in List view.

Related Topics
• Choosing and configuring a source file of type XML on page 180

Using a remote XML file as a datasource

You require the http details, or the machine address, user name, password,
and port number for the remote machine.

Data Federator can access XML files on a remote machine on a network. It can access the files using an SMB share if your machine is in the same SMB domain or workgroup as the remote machine. Otherwise, Data Federator can use an FTP account or HTTP access.
1. In the Configuration pane, set the Source type parameter to XML file.
2. In the Configuration pane, set the Protocol parameter to FTP file system,
SMB share, or HTTP.
3. In the Configuration pane, set the Hostname, Port, Username, and
Password fields to the correct access details for the remote machine.

4. When adding a table to a remote source, in the File name or pattern field, configure the name of your remote XML file:
• For a remote source over FTP, enter the absolute path. Note that a root path for an FTP connection can be different from the root path on the remote machine. For example, a root path for FTP may be /public/data while the absolute path on the remote server is /home/user/public/data.

• For a remote source on an SMB network, you must indicate the path
from the shared directory. For example if the shared directory is
shareDir and the files are contained in a sub directory data, then you
must indicate the path as shareDir\data.

The path must start with the name of the shared directory without the
leading backslash "\".

Note:
If connecting to a public SMB directory on UNIX, you must log in with
the username guest.

• For a remote source on an HTTP network, you must indicate the path
of the URL from the end of the IP address or server name. For
example: /Folder1/Folder2/File.xml

Testing web service datasources

• You must have added a datasource whose source type is web service.
• You must have added tables to your web service datasource.

1. In the tree list, expand Datasources, then your-datasource-name, then Draft, then click your-datasource-table-name.
2. Click Query tool.
3. For any column in your datasource that is an "input column", assign the
column a constant value in the Filter field of the Query tool panel.
For example, if the Symbol column is an "input column", enter a formula
like the following in the Filter field:

Symbol='SAP'

4. Click View data.


Data Federator contacts the web service, using any values that you assigned
in the Filter field as the values of the input parameters. When the web service
responds, Data Federator displays the response as a table in the Query
tool.

If Data Federator displays an error in contacting the web service, you can
ask your administrator to reconfigure the connector to web services.

Related Topics
• Adding a web service datasource on page 183
• Selecting which response elements to convert to tables in a web service
datasource on page 187
• Configuring connectors to web services on page 479

Creating remote Query Server datasources


You can use a remote Data Federator Query Server as a datasource. You
use the tables that Query Server returns as your source.

Configuring a remote Query Server datasource

In order to use a remote installation of Data Federator Query Server as a datasource, you require the following details from the remote installation of Query Server that you want to use:
• hostname and port
• login username and password details
• the catalog and schema details to use

Note:
The remote installation of Query Server must be running in order for you to use it as a datasource.

To add a remote Data Federator Query Server as a datasource:

1. At the top of the window, click Add, and from the pull-down list, select Add datasource.
The New Datasource window is displayed.

2. Enter a name for your datasource, and in the Datasource type pull-down
list, select Remote Query Server.
3. Click Save.
The Draft details window is displayed.
4. Enter the remote Query Server details, and then click the Test the
Connection button.
A message is displayed, confirming that the test was successful.
5. Click Update all tables to display the available tables.
The tables in the catalog and schema that you selected are listed. The
datasource is now available for use.

Related Topics
• About datasources on page 66
• Managing resources using Data Federator Administrator on page 483

Connection parameters in Remote Query Server datasources that can use deployment context parameters

For this datasource type, you can use deployment context parameters for
the following fields.
• Host name
• Password
• Port
• User Name
• Remote catalog and schema

To use a deployment context parameter in a datasource definition field, use the syntax:

${parameter}

where parameter is the deployment context parameter that you want to use.
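
For example, the Host name field could be set to ${qs_host} and the Port field to ${qs_port}, where qs_host and qs_port are illustrative names of deployment context parameters defined for your project.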

Related Topics
• Defining a connection with deployment context parameters on page 156


Managing datasources
You can perform most simple operations on datasources from the tree list
in Data Federator Designer.
The Datasources node in the tree list displays the "Datasources" window,
where you can browse, edit and delete datasources.

Defining the schema of a datasource

You can define information about the schema of a datasource in its table
schema window.
1. Open the Table schema window.
2. For each column, set the options as follows.

• Key (key icon): check this check box to specify that the column is part of the table's key.
• Index (index icon): check this check box to specify that the column is an index. Index means that the column has a large number of distinct values.
• Input: check this check box to specify that a value must be provided for this column in order to retrieve the rest of the row. You can use the input column to prevent the rows of the target table from being retrieved if the values for this column are not provided in the query. This prevents operations like "SELECT *". More precisely, it forces the WHERE clause to provide a value for the column.
• Distinct: enter the number of distinct values that appear in this column. Data Federator Query Server uses this value to optimize queries.
• Description icon: click this icon to display a new window allowing you to enter a description of the column.

3. If your datasource is a text file, you can amend it as follows.

• Add more columns: select a row, then click Add columns, then click the number of new columns you want to add.
• Change a column name: edit the name of the column in the Column name column.
• Change a column type: edit the type of the column in the Column type column.
• Delete a column: click the Delete icon in the row that defines the column that you want to delete.
• Extract the entire schema again: in the Schema definition pane, click Extract schema.

Note:
The columns must be indicated in the order that the fields appear in the
datasource.
When the source of the datasource is a file containing fixed-length fields,
you must also indicate the number of characters of each field.

4. Click Save.

Related Topics
• Using data types and constants in Data Federator Designer on page 604

Authentication methods for database datasources

When selecting a data access authentication method, the following options are available:

• Use a specific database logon for all Data Federator users: Data Federator connects to the database using the username and password that you enter. For each user, Data Federator uses the same username and password.
• Use the Data Federator logon: Data Federator connects to the datasource using the username and password used to log in to Data Federator.
• Use a Data Federator login domain: Data Federator connects to the datasource by mapping Data Federator users to database users. Data Federator uses potentially different usernames and passwords for all Data Federator users, depending on how you or your administrator have set up the login domains.

Displaying the impact and lineage of datasource tables
1. Open your datasource table.
2. Click Impact and Lineage.
Note:
Only datasources that have Final status show impact and lineage.

The Impact and lineage pane for your datasource table expands and
appears.

Related Topics
• How to read the Impact and lineage pane in Data Federator Designer on
page 52

Restricting access to columns using input columns

If you are adding a JDBC datasource or a web service datasource that has
a column with required values, you can check the Input box beside the
column to represent this requirement.

Web service datasources often have columns with required values, because web services require a request parameter, like a stock ticker symbol, in order to provide a specific response.

JDBC datasources rarely have columns with required values, but if they do
you can use the Input feature to represent this.
1. Edit the schema of your datasource table.
a. In the treelist, expand Datasources.
b. Click your-datasource-table-name.
In the window that appears, you can edit the schema in the Table
Schema pane.

2. Check Input next to the columns that you want to restrict.
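
For example, suppose you mark the column Symbol of a datasource table named StockQuote as an input column (the table and column names here are illustrative). A query that supplies a value for the input column in its WHERE clause can be answered:

SELECT Price FROM StockQuote WHERE Symbol = 'SAP'

whereas a query such as SELECT * FROM StockQuote is rejected, because it does not provide a value for the input column.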

Related Topics
• Mapping values to input columns on page 233

Changing the source type of a datasource

You can change the source type of the draft version of a datasource.

Changing the source type of your datasource has the following consequences:
• Your data extraction parameters are erased.
• Your table schemas are erased.

1. In the bottom of the tree list, click Datasources.


2. Expand your-datasource-name, then click Draft.


The Datasources > your-datasource-name > Draft window appears.

3. Click Change type, then click the type to which you want to change your
datasource.

Deleting a datasource

You can delete a datasource from the Datasources window.


1. Click Datasources.
The Datasources window appears.

2. In the Datasources window, select the check box of the datasource(s) you want to delete.
3. Click Delete.
Data Federator deletes the datasources you selected.

Testing and finalizing datasources


To test a datasource, you must verify that the information you entered allows Data Federator to extract all data from the source files and to correctly populate the datasource tables.

You can encounter the following problems:
• Incorrect configuration, for example incorrect values for datasource definition or extraction parameters, or an incorrect schema definition.
• Configuration that is not consistent with the data in the source file.

You can perform tests by running a query.

Note:
The tests must be done table by table. It is often practical to test the
datasource tables when they are created.

A datasource is completely tested when all its tables have been tested, and
are all correctly populated.

Related Topics
• Running a query on a datasource on page 211

Running a query on a datasource

You can run a query on a datasource to test that your datasource definition
and schema definition are returning the right values.

Running a query on your datasource is a way to test what values Data Federator retrieves when it uses your datasource to respond to requests.
1. In the tree list, expand your-datasource-name, then expand Draft, then
click your-datasource-table-name in the tree list.

The Datasources > your-datasource-name > Draft > your-datasource-table-name window appears.

2. In the Query tool pane, select the columns of the datasource table you
wish to test and click View data.
Data Federator extracts data from the file, then displays the data in
columns in the query frame.

3. Verify that the values in your file appear under the correct columns.
If they are not, try adjusting the schema again.

Example: Tests to perform on your datasource when its source is a text file
• Verify that Data Federator extracts the right number of rows.

Run a query, as in Running a query on a datasource on page 211, and select the Show total number of rows only check box.


The number of rows will appear above the query results.


• Verify that Data Federator extracts dates in the correct format.

For example, if you enter the value "dd-MM-yyyy" in the Date format
box, and the dates in your text file are "01-02-2000", where "01" means
"January", then Data Federator will extract the wrong date.

Make sure you use the value "MM-dd-yyyy" if the month appears before
the day in your source file.
• Verify that each value lines up in its correct column.

For example, a value might break across two columns.

To fix this, make sure that you choose the field separator correctly when
you configure the extraction parameters.

Related Topics
• Running a query to test your configuration on page 614
• Printing a data sheet on page 617
• File extraction parameters on page 162

Making your datasource final

Once the datasource is final, you can use it in a mapping.

When you make a datasource final:
• Its previous final is replaced.
• Its draft is erased.

1. In the bottom of the tree list, click Datasources.


2. Click your-datasource-name in the tree list.

The Datasources > your-datasource-name window appears.

3. Click Make final.


Your datasource appears in Datasources > your-datasource-name
> Final.

Note:
If you already have a datasource in final, it is replaced.
Your draft is erased.

Editing a final datasource

To edit a datasource that you have already made final, you must copy it to
a draft.

When you copy a final datasource to a draft:


• Its previous draft is replaced.


• Its final remains, and you can still use it in a mapping.

1. In the bottom of the tree list, click Datasources.


2. Click your-datasource-name in the tree list.

The Datasources > your-datasource-name window appears.

3. Click Copy to draft.


Your datasource appears in Datasources > your-datasource-name
> Draft.

Note:
The datasource previously in draft is replaced.

Mapping datasources to targets

Mapping datasources to targets process overview

The following process shows the simplest mapping that you can add. The process lists the steps in mapping a single datasource table to a single target table.

• (1) Add a mapping rule for a target table.


• (2) Select a datasource table that maps the key of the target table.
• (3) Write mapping formulas to map the columns of the target table.

Related Topics
• Adding a mapping rule for a target table on page 217
• Selecting a datasource table for the mapping rule on page 218
• Writing mapping formulas on page 219

The user interface for mapping

The following diagram shows what you see in Data Federator Designer when
you work with mappings:


The main components of the user interface for working with mappings are:
• (A) the tree view, where you navigate among your target tables and
mappings
• (B) the main view, where you configure your mappings
• (C) an expanded node, showing a mapping rule for a target table with the
datasource, lookup and domain tables that participate in the mapping
rule
• (D) an expanded pane, showing how you edit mapping formulas

Adding a mapping rule for a target table

• You must have created a target table.

You add a mapping rule to map data from source tables to target tables.
1. In the tree list, click Target tables, then your-target-table-name, then
Mapping rules.
The Mapping rules window appears.


2. Click Add.
The New mapping rule window appears.

3. In the General pane, type the name and description of your mapping rule
and click Save.
The new mapping rule appears in the tree list.

Related Topics
• Managing target tables on page 46

Selecting a datasource table for the mapping rule

• You must have added a mapping rule.


• You must have created a datasource.
• You must have made your datasource final.

You can select multiple datasource tables to use as the source of a mapping
rule. This procedure shows how to select a single datasource table.

A datasource table that has a column that contributes to the key of a target table is called a "core table".
1. In the tree list, click Target tables, then your-target-table-name, then
Mapping rules, then your-mapping-rule-name
The your-mapping-rule-name window appears .
2. In the Table relationships and pre-filters pane, click the Add a new
table to the mapping rule icon.
The Add a table to the mapping pop-up window appears.
3. In the tree list of the Add a table to the mapping pop-up window, select
the required datasource.
The name of your selected datasource table appears in the Selected
table field.
4. By selecting the appropriate checkboxes as required, define its alias,
whether it should be a core table and whether it should have distinct rows.
5. Click OK.
The selected datasource table appears in the mapping rule.

Related Topics
• Adding a mapping rule for a target table on page 217
• About datasources on page 66
• Creating generic JDBC or ODBC datasources on page 138
• Making your datasource final on page 212
• Managing datasource, lookup and domain tables in a mapping rule on
page 282

Writing mapping formulas

• You must have added a mapping and selected a datasource table.

You use mapping formulas to define relationships between values in your datasource tables and values in your target tables.
1. Click Target tables, then your-target-table-name, then Mapping
rules.
The Mapping rules window appears, showing a list of your mapping
rules.

2. Click the Edit this mapping rule icon beside the mapping rule that you want to open.
The Target tables > your-target-table-name > Mapping rules > your-mapping-rule-name window appears, where you can modify your mapping rule.

3. Edit the formula, using Ctrl + Spacebar to display autocomplete, if required.
Examples of mapping formulas for a target column are:
• S4.ID

• concat(S12.FIRSTNAME, S12.LASTNAME)

4. Click Save.
Data Federator verifies and saves the mapping formula for the column.


Tip:
Displaying the column type when writing mapping formulas
When writing a mapping formula, you will likely need to know the type of the
column that you are mapping. An easy way to display the type is to roll the
mouse over the name of the column.

A tooltip appears, showing the column type.

Related Topics
• Selecting a datasource table for the mapping rule on page 218
• Mapping values using formulas on page 222

Determining the status of a mapping


Data Federator displays the current status of each of your mapping rules.
You can use this status to learn if you have entered all of the information
that Data Federator needs to use the mapping rule.

Each mapping rule goes through the statuses:


• incomplete

(Data Federator does not show this status in the interface. All new
mapping rules are put in this status.)
• completed
• tested

The status is shown in the Target tables > your-target-table-name > Mapping rules > your-mapping-rule-name window, in the Status pane.

The following shows what to do for each status of the mapping rule life cycle.

• incomplete:
  • The mapping rule has no datasource tables: add a datasource table to the mapping rule.
  • Some of the columns in the target table are not mapped: write a mapping formula for all the columns in the target table.
  • Some datasource tables are not linked to core tables by relationships: add relationships from those tables to the core tables.
• completed: you have defined all of the formulas and relationships, but you have not checked the mapping rule against the constraints. Test the mapping rule.
• tested: you have checked the data of the mapping rule against all of the constraints that are marked as required.

Related Topics
• Managing relationships between datasource tables on page 253
• Writing mapping formulas on page 219
• Adding a mapping rule for a target table on page 217
• Testing mappings on page 280


Mapping values using formulas


You use mapping formulas to define relationships between values in your
datasource tables and values in your target tables.
The Data Federator mapping formulas also let you transform values. For
example, you can use formulas to construct new values in your target tables,
combine multiple values, or calculate results.

Mapping formula syntax

Use the following rules when writing a mapping formula:


• Start the formula with an equals sign (=).
• Refer to your datasource tables by their id numbers (Sn).
• Refer to columns in datasource tables by their aliases. The alias is either
an id number or a name (Sn.An or Sn.[column_name]).
• Use the Data Federator functions to construct the column values or
constants.

Example: Basic functions you use in a mapping formula

Table 5-2: Examples of basic functions in a mapping formula

• To convert a date value from one format to another, use a formula such as: =permute( S1.date_of_birth, 'AyyMMdd', '19yy-MM-dd' )
• To concatenate text values, use a formula such as: =concat(concat( S1.lastname, ', ' ), S2.firstname)
• To extract a substring, use a formula such as: =substring(S1.A1, 5, 10)

For a full list of functions that you can use, see Function reference on
page 624.

For details about the data types in Data Federator, see Using data types and
constants in Data Federator Designer on page 604.

Filling in mapping formulas automatically

When you have to map a lot of columns, Data Federator can add mapping
formulas automatically.

Data Federator automatically maps columns in the datasource tables whose names match the columns in the target table.
1. Select a datasource table for a mapping rule (see Selecting a datasource
table for the mapping rule on page 218).
2. In the Mapping formulas pane, click Auto map.
If:
• target column T.A is empty, AND
• Data Federator finds a datasource column S.A where name of S.A
= name of T.A,

OR
• Data Federator finds a datasource column S.A where name of S.A
= name of T.A ignoring all periods, hyphens, and other
non-alphanumeric characters,

then Data Federator fills in the formula = S.A for the target column A.


Example: Mapping formulas that are filled in automatically

In this example, Data Federator has found the names of several datasource columns and mapped them to the target columns. It has not affected the columns that were already mapped.

Setting a constant in a column of a target table

• You must have referenced a domain table from your target table.

You may want to set a constant value for two reasons:


• Your target table has a column that does not appear in the datasource
tables, so you must create the value for each row.
• You know that all of the values in a datasource table map to the same
value in a target table.

In this example, the target table has a column called "country" that is not
available in the datasource table. However, all of the rows in the datasource
table are known to have the same value of country.
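
In a case like this, the mapping formula for the country column can simply be a constant, for example (the value shown is illustrative):

= 'US'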
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Either:

a. In the Mapping formulas pane, in the target column whose value you
want to map, enter a constant value.
3. Or, if the target column is type "enumerated":
a. In the Mapping formulas pane, beside the target column whose value
you want to map, click the Choose domain table value icon

A frame appears, showing the domain table that you used as the
domain of this column.
b. Click the value that you want to use as a constant.


You can click any column, but only the value from the column that you selected in the schema of the target table will appear in the mapping formula.

The value appears as a formula in the Mapping formulas pane.

4. In the frame showing the domain table, click Close.


5. Click Save to apply your changes.
Data Federator Query Server maps all values of this column that come
from the source of this mapping to a constant.

Related Topics
• Using a domain table as the domain of a column on page 62

Testing mapping formulas

• You must have added a mapping (see Mapping datasources to targets process overview on page 216).
• You must have added a formula (see Writing mapping formulas on page 219).

You can run a query on a mapping formula to test that it is correctly mapping
values to the target table.
1. In the tree list, expand your-target-table-name, expand Mapping
rules, then click your-mapping-rule-name.
The Target tables > your-target-table-name > Mapping rules >
your-mapping-rule-name window appears.

2. In the Mapping formulas pane, click the Edit icon beside a formula.

A menu appears.

3. Click Edit.
The Target tables > your-target-table-name > Mapping rules >
your-mapping-rule-name > column-name window appears.

4. In the Formula test tool pane, click View data to see the query results.
For details on running the query, see Running a query to test your
configuration on page 614.

Data Federator displays the data in columns in the Data sheet frame.

5. Verify that the values are correct.


Otherwise, try adjusting the mapping formula again.

Writing aggregate formulas

Data Federator offers a set of standard aggregate functions that you can
use in your formulas.

Aggregate functions perform an operation over a set of data, for example on all the values in a column.

There is a list of aggregate functions at Aggregate functions on page 624.


Nesting aggregate functions

When you need nested aggregate functions in one formula, you must
decompose them into separate terms.

For example, the formula:

SUM(S1.A1 + AVG(S1.A2))

must be written as:

SUM(S1.A1) + AVG(S1.A2)

How aggregate formulas result in groupby

When you use an aggregate function in your mapping rule, the resulting
query will perform a groupby on all columns that are not aggregates.

If you use the following formulas, where none of the columns are marked as
a key:

target.A = source.A
target.B = source.B
target.C = MAX(source.C)

then Data Federator interprets them as the query:

SELECT A, B, MAX(C) FROM T GROUP BY A, B

The aggregate formula is applied by calculating the maximum value of C for all the groups of rows where A and B are identical.

Example: The effect of using aggregate formulas


For the following data and formulas:


S1.Customer S1.Amount

Almore 100

Beamer 100

Beamer 150
Data
Costly 100

Costly 200

Costly 250

T.Customer = S1.Cus- T.Amount = AVG(


Formulas
tomer S1.Amount)

The result is as follows.

T.Customer T.Amount

Almore 100

Beamer 125

Costly 183.333


Writing case statement formulas

• You must have added a mapping.

You can use a case statement formula when you want to express the result
as a series of possible cases instead of as a single formula.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Either:
a. In the Formulas field of the Mapping formulas pane, enter the case statement directly and click Save.
3. Or:
a. In the Mapping formulas pane, click the Edit icon beside the formula.

A menu appears.
b. Click Edit as case statement.
c. Click OK.
The Case statement pane appears
d. Click Add case, then Add new case.

To insert a row at a specific position, see the section on inserting rows
in tables.
e. Edit the If and then formulas.
For details on case statement formulas, see the section on the syntax
of case statement formulas.
f. Click Save.

Example: A basic case statement formula

Table 5-6: Example of a case statement formula

Conditions are tested in the following order:

1. If: S6.DAT_ENT LIKE '1%'  then: date = permute(S6.DAT_ENT, 'AyyMMdd', '19yy-MM-dd')
2. If: S6.DAT_ENT LIKE '2%'  then: date = permute(S6.DAT_ENT, 'AyyMMdd', '20yy-MM-dd')
3. Other cases (click Add case > Add default case to add this row): then: date = '0001-01-01'

Related Topics
• Mapping datasources to targets process overview on page 216
• Inserting rows in tables on page 618


• The syntax of case statement formulas on page 620

Testing case statement formulas

You can test a conditional formula by running a query on it.

Data Federator adapts the default query to your formula.

For example, when you are configuring your query, and you click the Default
button in the Case statement test tool pane, Data Federator will limit the
selected columns to those that are referenced in your formula.
1. In the tree list, expand Target tables, then expand your-target-table-
name, then expand Mapping rules, then click your-mapping-rule-name.

The Target tables > your-target-table-name > Mapping rules > your-mapping-rule-name window appears.

2. Click the Edit icon beside the case statement formula that you want to test, then click Edit.
The Target tables > your-target-table-name > Mapping rules >
your-mapping-rule-name > your-column-name window appears.

3. In the Case statement test tool pane, click the Default button.
Data Federator limits the selected columns to those that your formula
uses.

4. Click View data to see the query results.
For details on running the query, see Running a query to test your configuration on page 614.

Data Federator displays the data in columns in the Data sheet frame.

5. Verify that the values appear correctly in the target columns.
Otherwise, try adjusting the mapping formula.

Related Topics
• Running a query to test your configuration on page 614
• The syntax of case statement formulas on page 620

Mapping values to input columns
When your source tables have columns that require you to provide values,
Data Federator Designer guides you to make sure that the values will be
provided.

To take advantage of the ability of Data Federator Designer to guide you, do the following.
1. Add all of your tables.
2. Add relationships and pre-filters between tables.
3. If Data Federator tells you that a value for an input column is missing:
• Check if you can assign a constant value to the input column, using
a new pre-filter.
• If not possible, see if you can assign a dynamic value that comes from
another table, using a table relationship.
• If not possible, use an input value function to propagate the decision
to a higher target table.

Related Topics
• Assigning constant values to input columns using pre-filters on page 233
• Assigning dynamic values to input columns using table relationships on
page 234
• Propagating values to input columns using input value functions on
page 234
• Setting a constant in a column of a target table on page 224

Assigning constant values to input columns using pre-filters
1. Add a pre-filter on the input column.
2. Set the pre-filter to a constant value.
a. In the Formula panel, enter the formula your-column-name-or-alias
= constant.
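
For example, if the web service table is aliased S1 in the mapping rule and its input column is named Symbol (both names are illustrative), the pre-filter formula would be:

S1.Symbol = 'SAP'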


Related Topics
• Adding a pre-filter on a column of a datasource table on page 236

Assigning dynamic values to input columns using table relationships
1. Add a source table to your mapping.
a. In the tree list, click your-target-table-name > Mapping rules >
your-mapping-rule-name
b. In the Table relationships and pre-filters pane, click Add table.
c. In the Add a table to the mapping pane, click the table you want to
add, then click Add.
2. Add a relationship to the input column from the table you added.
a. In the Table relationships and pre-filters pane, click Add
relationship.
b. In the Add relationship pane, in the Columns box, click the input
column in the first table, and another column in the second table.
c. Click Add to formula, then Add.
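
For example, if the table you added is aliased S2 and has a column Symbol, and the web service table is aliased S1 with an input column Symbol (all names are illustrative), the relationship formula could be:

S1.Symbol = S2.Symbol

Each row of S2 then supplies a value for the input column of the web service table.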

Propagating values to input columns using input value functions

When one of the source tables in your mapping rule has a column that
requires a value, and you want to force the query to provide this value, you
can use an input value function.

When you use an input value function in such a way, the user or application
that sends the query is responsible for providing a value in the where clause
of the query. When this value is not provided, Data Federator Query Server
throws an error.
1. Edit the source tables in your mapping rule.
a. In the treelist, click Target tables > your-target-table-name >
Mapping rules > your-mapping-rule-name.
2. Add an input value function on the column.

a. In the Table relationships and pre-filters pane, click Input value
function....
3. Use the input value function to connect the source column to a column
in your target table.
For example, to propagate the value from the column A1 in table T, type
= T.A1.

Adding filters to mapping rules


Filters let you control data in mapping rules in two ways.
• pre-filters

Pre-filters let you limit the source data that Data Federator queries in a
mapping rule. For example, you can use a filter to limit customer data to
those who are born after a certain date.

You can use a pre-filter on each datasource table that is used in a mapping
rule.
• post-filters

Post-filters let you limit the data after it has been treated by table
relationships.

You can use one post-filter per mapping rule.

The precedence between filters and formulas

Pre-filters are applied before the table relationships.

Post-filters are applied after the table relationships.


Adding a pre-filter on a column of a datasource table

• You must have added a mapping.

You can add a pre-filter on a column of a datasource table to limit the data
that Data Federator retrieves from the column.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, select the table including
the column whose values you want to filter and click the Edit the selected
table icon.

The Edit the mapping source - your-datasource-table-name pop-up window appears.

3. Click Set a pre-filter.


The Edit Pre-Filter pop-up window appears:


4. Expand the tree list in the Tables and Columns pane and select a column
on which to add a filter formula.
Press Ctrl + Spacebar to activate autocomplete and display all possible
column names, if required.
5. Enter the filter formula in the Formula pane using, if required, the Tables
and Columns, Operator and Functions panes.
An example filter formula is:

S12.DATE_OF_BIRTH > '1970-01-01'

The column name appears in the Formula pane.

Note:
You can enter multiple filter formulas on different columns.

6. Click OK.
You are returned to the Edit the mapping source - your-datasource-table-name pop-up window.
7. Click OK.
The Table relationships and pre-filters pane shows a Filter icon over each table where you added a pre-filter.
• The Valid Filter icon appears when a pre-filter is correct.
• The Invalid Filter icon appears when a pre-filter is incorrect.

Related Topics
• Mapping datasources to targets process overview on page 216
• The syntax of filter formulas on page 619

Editing a pre-filter

• You must have added a mapping.

You can edit a pre-filter using the Table relationships and pre-filters pane.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, place your cursor over
each table to display any filter details:


3. Select the table including the column whose filter you want to edit and
click the Edit the selected table icon.
The Edit the mapping source - your-table-name pop-up window appears with filter details in the Pre-filter pane.

4. Click the Edit Pre-filter button.


The Edit Pre-filter pop-up window appears.
5. Edit the filter formula in the Formula pane, using the Tables and
Columns, Operator and Functions panes, if required.
Note:
When you edit a filter formula, you cannot change the column on which
you filter. To change the column, delete the filter, and add a new filter on
a different column.

6. Click Update.

Related Topics
• Mapping datasources to targets process overview on page 216


Deleting a pre-filter

You can delete a pre-filter using the Table relationships and pre-filters
pane.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, place your cursor over
each table to display any filter details:

3. Select the table including the column whose filter you want to delete and
click the Edit the selected table icon.
The Edit the mapping source - your-table-name pop-up window appears with filter details in the Pre-filter pane:

4. Click the Delete Pre-filter button.


The Pre-filter pane shows the Set a Pre-filter button.
5. Click OK.
The pre-filter formula is removed and the pre-filter disappears from the
Table relationships and pre-filters pane.


Related Topics
• Mapping datasources to targets process overview on page 216

Using lookup tables


You can use a lookup table to map values from a datasource table to values
in a domain table.

You need a lookup table when the values in the column of a datasource table
must be translated to the values in the column of a target table.

What is a lookup table?

A lookup table associates the values of columns of one table to the values
of columns in another table.
• Lookup tables hold columns of data with mappings to other columns of
data.
• The data in a lookup table is stored on the Data Federator Query Server.
• You can combine a lookup table with a domain table to map the values
in a datasource column to the values in a domain table (see Using lookup
tables on page 242).
• Lookup tables support up to 5000 rows.

Example: A case where you might need a lookup table


The following is an example of a lookup table that you can use to associate
a list in a datasource table to a different list in your target table. In this
procedure, the datasource table uses text codes to represent sex, while
the target table uses integers.

A datasource table has a column "sex" with the values:


• F
• M

Your target table has a column "sex" with the enumerated values:
• 1

• 2

To complete your mapping, you must create a lookup table that maps
• F to 1
• M to 2

The following table lists the types of lookup tables that Data Federator lets
you create.

Type of lookup table: Datasource table to domain table
How you implement it: You create a lookup table with two columns. The first column references a column in the datasource table; the second column references a column in the domain table.

Type of lookup table: Datasource table to datasource table
How you implement it: You create a lookup table with two columns. The first column references a column in a datasource table; the second column references a column in another datasource table.

The process of adding a lookup table between columns

You can use a lookup table to map values from a datasource table to values
in a domain table.


You need a lookup table when the values in the column of a datasource table
must be translated to the values in the column of a target table.

The following process lists the steps in adding a lookup table to associate
the values of a column in a datasource table to a column in a domain table.

• (1) Add a lookup table (see Adding a lookup table on page 244).
• (2) Reference a datasource table in your lookup (see Referencing a
datasource table in a lookup table on page 246).
• (3) Reference a domain table in your lookup (see Referencing a domain
table in a lookup table on page 247).
• (4) Map the values in the datasource table to the values in the domain
table (see Mapping values between a datasource table and a domain
table on page 248).

Adding a lookup table

This procedure shows how to add a lookup table, which establishes a correspondence between one set of values and another set of values.
1. In the bottom of the tree list, click Lookup tables.
2. Click Add lookup table, then click Create a lookup table.
The Lookup tables > New lookup table window appears.

3. In the Table name box, type a name for your new lookup table.
4. In the Table schema pane, click Add columns, then click Add
datasource column to add one column from a datasource table.
An empty datasource column appears.

5. In the Table schema pane, click Add columns, then click Add domain
column to add one column from a domain table.
You can add columns repeatedly.

To add a new column at a specific position, see Inserting rows in tables
on page 618.

An empty domain column appears. The domain column is marked by a domain icon.

6. Enter the following values in the table.

In the text box... enter the following...

Datasource column, Column name assoc_sex_char

Datasource column, Column type STRING

Domain column, Column name assoc_sex_num

Domain column, Column type INTEGER

For details about the data types in Data Federator, see Using data types
and constants in Data Federator Designer on page 604.

7. Click Save.
Your new lookup table appears in the tree list.

The Lookup tables > your-lookup-name window appears. In this window, you can reference a datasource table and a domain table (see Referencing a datasource table in a lookup table on page 246 and Referencing a domain table in a lookup table on page 247).


Referencing a datasource table in a lookup table

• You must have added a lookup table.


• You must have added a datasource that contains a datasource table.
• You must have made your datasource final.

When you have created a lookup table, you can reference a datasource
table. The datasource table is the first part of the lookup.
1. In the bottom of the tree list, click Lookup tables.
2. Click your-lookup-table-name in the tree list.

The Lookup tables > your-lookup-table-name window appears.

3. In the Datasource table contributors to lookup values pane, click Add.


The Lookup tables > your-lookup-table-name > Description >
Add a new relationship window appears.

4. In the list, expand the datasource, then click the datasource table whose
column you want to use in the lookup.
The name of the selected datasource appears in the Selected Datasource
box.

The name of the selected datasource table appears in the Selected table
box.

The list of columns you can map appears in the your-lookup-first-column-name list.

5. From the your-lookup-datasource-column-name list, select the name of the column that you want to use in the lookup.
6. Click Save.
The Lookup tables > your-lookup-table-name window appears,
showing the datasource table you selected as part of the lookup.

Related Topics
• Adding a lookup table on page 244
• About datasources on page 66
• Creating generic JDBC or ODBC datasources on page 138
• Making your datasource final on page 212

Referencing a domain table in a lookup table

• You must have added a lookup table (see Adding a lookup table on
page 244).
• You must have added a domain table (see Adding a domain table to
enumerate values in a target column on page 55).

When you have created a lookup table, you can reference a domain table.
The domain table is the second part of the lookup.
1. In the bottom of the tree list, click Lookup tables.
2. Click your-lookup-table-name in the tree list.

The Lookup tables > your-lookup-table-name window appears.

3. In the Domain contributors to lookup values pane, click Add.


The Lookup tables > your-lookup-table-name > Description >
Add a new relationship window appears.

4. In the list, click the domain table whose column you want to use in the
lookup.
The name of the selected domain table appears in the Lookup table box.

The list of columns you can map appears in the your-lookup-second-column-name list.

5. From the your-lookup-domain-column-name list, select the name of the column that you want to use in the lookup.


6. Click Save.
The Lookup tables > your-lookup-table-name > Description
window appears, showing the domain table you selected as part of the
lookup.

Mapping values between a datasource table and a domain table

• You must have referenced a datasource table and a domain table in your
lookup (see Referencing a datasource table in a lookup table on page
246 and Referencing a domain table in a lookup table on page 247).

This section shows how to associate the values in the column of a datasource
table to the values in the column of a domain table.

In this example, the set of values in the datasource is {F, M}, and the set
of values in the domain table is {1, 2}.
1. In the bottom of the tree list, click Lookup tables.
2. Click your-lookup-table-name in the tree list.

The Lookup tables > your-lookup-table-name window appears.

3. In the Datasource table contributors to lookup values pane, select your datasource table, and click Update table contents.
The Table contents pane shows the values that Data Federator imported
from your datasource table. If you followed the procedure at Referencing
a datasource table in a lookup table on page 246, then you see the
following values:
• F
• M

4. Beside the row "F", click the Edit icon.
The Lookup tables > your-lookup-table-name > Modify row window
appears. This frame contains a blank row containing two text boxes. The
first text box is a column from your datasource table. The second text box
is a column from your domain table.

5. Click the Lookup values icon.
The your-domain-table-name frame appears.

6. In the domain table, click the row (1, female).


The value "1" appears in the lookup table.

7. Click Save.
The Table contents pane shows the row (F, 1) in your table. This means
that the value F in your datasource table is associated to the value 1 in
your domain table.

8. Repeat steps 4 to 7 to associate the value M to the value 2.


Your lookup table is complete, and is ready to use in a mapping.

Adding a lookup table by importing data from a file

If you have a lot of lookup data, you can enter it into your Data Federator
project quickly by importing the data from a text file.

For example, Data Federator can import data such as the following.

file: my-lookup-data.txt
"political_region";"code"
"Alabama";"1"
"Alaska";"2"
"Arizona";"3"
"Arkansas";"4"

"California";"5"
"Colorado";"6"
"Connecticut";"7"
"Delaware";"8"
"Florida";"9"
"Georgia";"10"
"Hawaii";"11"
"Idaho";"12"
"Illinois";"13"
"Indiana";"14"
"Iowa";"15"
"Kansas";"16"
"Kentucky ";"17"
"Louisiana ";"18"
"Maine";"19"
"Maryland";"20"
"Massachusetts";"21"
"Michigan";"22"
"Minnesota";"23"
"Mississippi";"24"
"Missouri";"25"
"Montana";"26"
"Nebraska";"27"
"Nevada";"28"
"New Hampshire";"29"
"New Jersey";"30"
"New Mexico";"31"
"New York";"32"
"North Carolina";"33"
"North Dakota";"34"
"Ohio";"35"
"Oklahoma ";"36"
"Oregon";"37"
"Pennsylvania";"38"
"Rhode Island";"39"
"South Carolina";"40"
"South Dakota";"41"
"Tennessee";"42"
"Texas";"43"
"Utah";"44"
"Vermont";"45"
"Virginia";"46"
"Washington";"47"
"West Virginia";"48"
"Wisconsin";"49"
"Wyoming";"50"

1. Make a new text file to define the lookup table data.


Ensure the new file is in comma-separated value (CSV) format, as in the
example above.

2. Add a lookup table and reference a datasource and domain table in it.

3. Add a second datasource that points to the file from which you want to
import.
4. When the Lookup tables > your-lookup-table-name window
appears, click Add, then click Add from a datasource table.
The Lookup tables > your-lookup-table-name > Add rows from
a datasource window appears.

5. Refer to the Select a datasource table field and select the datasource
table to be added to the lookup table.
The first columns of the selected lookup table, and their types, are
displayed in the lookup table drop-down list-boxes beneath the Lookup
columns mapping pane.

They are also displayed in the Select a subset of columns field on the
right. You can, if required, select one or all of the columns in this field and
click View Data to display the contents of the selected columns.

6. Refer to the Lookup columns mapping pane and map the required
datasource column from each lookup table column's drop-down list-box.
7. Click Save.
The Lookup tables > your-lookup-table-name window is displayed
and your file's imported data is added to your lookup table.

Related Topics
• Using data types and constants in Data Federator Designer on page 604
• Adding a lookup table on page 244
• Referencing a datasource table in a lookup table on page 246
• Referencing a domain table in a lookup table on page 247
• Creating text file datasources on page 158

Dereferencing a domain table from a lookup table

In this version, the only way to dereference a domain table from a lookup
table is to delete the lookup table.

Related Topics
• Deleting a lookup table on page 252


Deleting a lookup table


1. In the tree list, click Lookup tables.
The Projects > Lookup tables window opens.

2. Select the check box beside the lookup table you want to delete.
3. Click Delete.

Exporting a lookup table as CSV

• You must have added a lookup table.

See Adding a lookup table on page 244.

1. In the tree list, click Lookup tables.


The Lookup tables window appears.
2. Select the table you want to export as CSV.
The Lookup tables > your-lookup-table-name window appears.
3. Click Export
The File download window appears giving you the option of opening or
saving your Lookup_your-lookup-table-name.csv file.
4. Click Save and save the .csv file to a location of your choosing.

Using a target as a datasource


You can use a target as a datasource for another mapping. This lets you
build several levels of mappings.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, click Add table.
The Add a table to the mapping pane appears.
3. Click on a specific table.
The name of the selected table appears in the Selected table field.

4. Click Add to add the table to the mapping rule.
The table appears in the tree list under the Target tables > your-target-table-name > Mapping rules > your-mapping-rule-name > Tables node.

Managing relationships between datasource tables
This section shows how to manage the relationships between datasource
tables, which you need to do while mapping columns from multiple datasource
tables.

You need to manage relationships when you have multiple datasource tables
and the data in those tables is related.

Related Topics
• The process of mapping multiple datasource tables to one target table
on page 264

The precedence between formulas and relationships

When you add a relationship between datasources, it is applied after the mapping formulas.


Finding incomplete relationships

To find the datasource tables that have no, or incorrect, relationships to the
core tables, look for red bars in the Table relationships and pre-filters
pane.


The image above, however, also shows a solid grey line between the first
two (non-core) tables. This means their relationship is satisfactory. A dotted
red line means the relationship is erroneous.

Meaning of colors in the datasource relationships diagram

The meaning of a red bar in the datasource relationships diagram depends on whether you have chosen to designate the table a core table or non-core table. In either case, it indicates a situation that should be corrected.

You can right-click a table to see if it is a core table or not.

The following table shows the meaning of the colors depending on whether
a table is a core table or not.


Table 5-9: The meaning of colors in the datasource relationships diagram

Core table:
• Red bar: At least one table has no relationships with the other core tables.
• Blue bar: All of the core tables are related through relationships to other core tables.

Non-core table:
• Red bar: The table has no relationship to any core table, neither directly nor through other tables.
• Blue bar: The table has a relationship to at least one core table, either directly or through other tables.

Relationship lines between tables:
• Dotted red line: The relationship has an error.
• Solid gray line: The relationship is correct.

Add relationships as required until no red bar is displayed.

Related Topics
• Adding a relationship on page 256

Adding a relationship

• You must have added a mapping.

You can add a relationship between datasource tables in the Table relationships and pre-filters pane.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Find the datasource tables that have no relationships.

3. In the Table relationships and pre-filters pane, click Add relationship:

The Add relationship pop-up window appears:


4. Select columns from each table and the operator from the drop-down
between Table 1 and Table 2.
An example relationship is:

S1.A2 = S2.A2

The resultant formula is displayed in the Formula pane.


Note:
Alternatively, write and edit the formula directly in the Formula pane.

5. Click OK.
The Table relationships and pre-filters pane shows your relationship.

Related Topics
• Mapping datasources to targets process overview on page 216
• Finding incomplete relationships on page 254
• The syntax of relationship formulas on page 621

Editing a relationship

• You must have added a mapping.

You can edit a relationship between datasource tables in the Table relationships and pre-filters pane.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, click the relationship to
select it.

3. Click the Edit the selected datasource relationship icon.


The Edit relationship pop-up window appears.
4. Select columns from each table and the operator from the drop-down
between Table 1 and Table 2, as required.
The resultant formula is displayed in the Formula box.
Note:
Alternatively, write and edit the formula directly in the Formula box.

5. Click OK.


The Table relationships and pre-filters pane shows your relationship.

Related Topics
• Mapping datasources to targets process overview on page 216
• Finding incomplete relationships on page 254
• The syntax of relationship formulas on page 621

Deleting a relationship

You can delete a relationship between datasource tables in the Table relationships and pre-filters pane.
1. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, click the relationship to
select it.

The selected relationship is shown in bold.

3. Click Remove.
4. Click OK to confirm you want to remove the relationship formula.
The Table relationships and pre-filters pane no longer shows the
relationship.


Choosing a core table

The following procedure shows how to designate a table as a core table.


1. Edit the mapping rule.
2. Right-click the table that you want to choose as a core table.
The table appears selected in the Table relationships and pre-filters
pane, and a context menu appears.
3. On the context menu, select the Core table option.

4. Click Save.

Configuring meanings of table relationships using core tables

When you map multiple datasources to a target, you must distinguish between
core tables and non-core tables.
• Use a core table to choose the set of rows that will populate your target
table (the result set).

When you set two or more tables as core, the result set is defined by the
join of all the core tables.


• Use non-core tables to extend the attributes of each row in the result set.

Example: The effect of setting a table as core or non-core


Suppose that you have two tables: Customers and Orders.

If you set the Customers table to core and the Orders table to non-core, a join between the two tables returns all customers, including those who did not purchase anything (a left outer join).

If you set the Customers table to core and the Orders table to core, a join between the two tables returns only those customers who purchased something (an inner join).
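
As a rough SQL analogy (an illustrative sketch only, not the exact query that Data Federator generates; the customer_id join column is an assumption):

-- Customers core, Orders non-core: roughly a left outer join
SELECT c.customer_id, o.order_id
FROM Customers c
LEFT OUTER JOIN Orders o ON c.customer_id = o.customer_id

-- Customers core, Orders core: roughly an inner join
SELECT c.customer_id, o.order_id
FROM Customers c
INNER JOIN Orders o ON c.customer_id = o.customer_id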

The following icon, displayed beneath datasource table aliases such as S1,
S2 or S10 in the Table relationships and pre-filters pane, indicates they
are core tables:

The table below describes how you use core tables to configure meanings
of table relationships:

If you have: One source table
And: want to map a column to the key of the target table
Then ensure: the source table is a core table

If you have: One source table
And: the target table has no key columns
Then ensure: the source table is a core table

If you have: Two source tables
And: want to display all values in all rows, including null values
Then ensure: only one source table is a core table

If you have: Two source tables
And: do not want to display rows that contain null values
Then ensure: both source tables are core tables

If you have: Three source tables
And: have a non-core table between two core tables
Then ensure: you change the non-core table to a core table, or one of the outer core tables to a non-core table

The effects on the target table of designating a source table as a core table
are represented in the following diagram:

Using a domain table to constrain possible values

• You must have created a target table.


• You must have created a domain table.
• You must have added a datasource table.

You can use a domain table to constrain the values of a target table by
defining a relationship between the domain table and your datasource.
1. Add the datasource table as a source of your mapping.
2. Add the domain table as a source of your mapping.
3. Add a relationship between the key columns of the datasource table
whose values you want to constrain and the domain table.


To add this kind of relationship, add a relationship as in Adding a relationship on page 256, and enter the following formula.

datasource-id.key-column-id = domain-id.key-column-id

For example:

S1.A1 = S2.A1

Only the rows of the datasource whose ID matches one of the IDs in the
domain table appear in the target.

Related Topics
• Managing target tables on page 46
• Managing domain tables on page 54

The process of mapping multiple datasource tables to one target table

The following process lists the steps in mapping two datasource tables to
a single target table.
• (1) Add a datasource table (see Adding multiple datasource tables to a
mapping on page 265).
• (2) Write mapping formulas (see Writing mapping formulas when mapping
multiple datasource tables on page 265)
• (3) Add relationships between the datasource tables (see Adding a
relationship when mapping multiple datasource tables on page 267)

Tip:
In what order to proceed when mapping multiple datasource tables
Start by adding the datasource tables that map the key of the target table,
then proceed with the datasource tables that are needed to map the non-key
columns of the target table.


Adding multiple datasource tables to a mapping

Adding multiple datasource tables is the same as adding a single datasource table multiple times.

To add a datasource table to a mapping, see Selecting a datasource table for the mapping rule on page 218. Repeat this to add all of your datasource tables.

Writing mapping formulas when mapping multiple datasource tables

Writing mapping formulas for multiple datasource tables is the same as for
a single datasource table. To write a mapping formula, see Writing mapping
formulas on page 219.

While you are adding mapping formulas:


• When you have a choice between multiple datasource tables to map the
key in the target table, see Interpreting the results of a mapping of multiple
datasource tables on page 270.

Example: Mapping columns from two datasource tables to one key column

In this example, you need to create a target table with columns that come from two datasource tables.

Example datasource table that contributes a key and non-key column

source1.order_id (key) source1.date

200101 April 2001

200102 January 2000

200103 March 2003

200104 March 2002

200200 January 2000

444444 January 2000

Table 5-13: Example datasource table that contributes one non-key column

source2.order_id (key) source2.quantity

200101 3

200102 40

200103 12

200104 560

200200 10

555555 10

Table 5-14: Example target table schema with columns from two datasource tables

target.order_id (key) target.date target.quantity

[key-values] [values] [values]

In this example, map the columns as follows.

Map this... to...

source1.order_id target.order_id

source1.date target.date

source2.quantity target.quantity

For the difference between mapping source1.order_id or source2.order_id to target.order_id, see Interpreting the results of a mapping of multiple datasource tables on page 270.

Adding a relationship when mapping multiple datasource tables

• You must have added a mapping.

You can add a relationship between datasource tables in the Table relationships and pre-filters pane.
1. In the tree list, expand your-target-table-name, expand Mapping rules, then click your-mapping-rule-name.


The Target tables > your-target-table-name > Mapping rules > your-mapping-rule-name window appears.

2. In the Table relationships and pre-filters pane, click Add relationship.


The Create datasource relationship frame appears.

3. Edit the relationship formula in the Formula box.


An example relationship is:

S1.A2 = S2.A2

4. Click Save.
The Table relationships and pre-filters pane shows the relationships.

5. Repeat steps 2-4 until all your datasource tables form a chain.
None of your datasource tables must be left without a relationship to
another table.

The syntax of relationship formulas allows you to use AND to map multiple
relationships between more than two tables simultaneously.
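
For instance (an illustrative sketch; the alias S3 and the column names are assumptions), a single relationship formula can chain three datasource tables:

S1.A2 = S2.A2 AND S2.A3 = S3.A3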

Example: A relationship between two datasource tables


In example Writing mapping formulas when mapping multiple datasource
tables on page 265, add the following relationship.


Add the relationship from... to...

source1.order_id source2.order_id

If you have followed the two examples Writing mapping formulas when
mapping multiple datasource tables on page 265 and the current one, the
result is as follows.
• The row (order_id1, date1, quantity1) exists in the target table T if there
is a row with a key of value "order_id1" in the datasource source1, and
there is a row with a key of value "order_id1" in the datasource source2.
• The row (order_id1, date1, null) exists in the target table T if there is a
row with a key of value "order_id1" in the datasource source1, and there
is no row with a key of value "order_id1" in the datasource source2.


Table 5-17: Example target table composed of columns from two datasource tables

target.order_id (key) target.date target.quantity

200101 April 2001 3

200102 January 2000 40

200103 March 2003 12

200104 March 2002 560

200200 January 2000 10

444444 January 2000 <null>

Related Topics
• Mapping datasources to targets process overview on page 216
• The syntax of relationship formulas on page 621

Interpreting the results of a mapping of multiple datasource tables

When you map two or more datasource tables to one target, the result
depends on several factors.

This section demonstrates the effect of the following factors on the result of
a mapping from two datasource tables to a target table:

• datasource tables that contribute to the key columns (which are called
"core tables")
• relationships between datasource tables

This section shows how mappings between the same datasource tables and
target can produce different results.

Example: Example of two datasource tables where the second one's values are optional

In this example, the factors are as follows.

Factor: core tables
Value: source1
For details: For details on core tables, see Configuring meanings of table relationships using core tables on page 261.

Factor: relationships between datasource tables
Value: source1.order_id = source2.order_id
For details: For details on adding relationships, see Managing relationships between datasource tables on page 253.

The result is as follows.


• The row (order_id1, date1, quantity1) exists in the target table T
if there is a row with a key of value order_id1 in the datasource
source1, and there is a row with a key of value order_id1 in the
datasource source2.
• The row (order_id1, date1, null) exists in the target table T if there is
a row with a key of value order_id1 in the datasource source1, and
there is no row with a key of value order_id1 in the datasource
source2.


Note:
This is the same example as the one in the procedure Adding a relationship
when mapping multiple datasource tables on page 267.

Example: Example of two datasource tables where the first one's values
are optional
In this example, the factors are as follows.

Factor: core tables
Value: source2
For details: For details on core tables, see Configuring meanings of table relationships using core tables on page 261.

Factor: relationships between datasource tables
Value: source1.order_id = source2.order_id
For details: For details on adding relationships, see Managing relationships between datasource tables on page 253.

The result is as follows.


• The row (order_id1, null, quantity1) exists in the target table T
if there is no row with a key of value order_id1 in the datasource
source1, and there is a row with a key of value order_id1 in the
datasource source2.
• The row (order_id1, date1, quantity1) exists in the target table T
if there is a row with a key of value order_id1 in the datasource
source1, and a row with a key of value order_id1 in the datasource
source2.

Note:
Because the target order_id is mapped from source2, in the result, date
may be NULL.


Combining mappings and case statements

This section demonstrates the effect of a case statement on the result of a mapping from two datasource tables to a target table.

Example: Example of two datasource tables where both values are required
In this example, the factors are as follows. For the schemas, see the
examples Writing mapping formulas when mapping multiple datasource
tables on page 265 and Adding a relationship when mapping multiple
datasource tables on page 267.

Factor: core tables
Value: source1

Factor: relationships between datasource tables
Value: source1.order_id = source2.order_id

The result is as follows.


• The row (order_id1, date1, quantity1) exists in the target table T if there
is a row with a key of value "order_id1" in the datasource source1, and
there is a row with a key of value "order_id1" in the datasource source2.
• The row (order_id1, date1, null) cannot exist in the target table T because
of the formula that denies rows where source2.order_id is NULL.
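
As an illustration only (not necessarily the exact case-statement formula used in this example; the alias S2 and the column name come from the example schemas), a condition that rejects rows where source2.order_id is NULL can be written with the comparison syntax shown elsewhere in this guide, for example as a pre-filter on the source2 table:

S2.order_id <> NULL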

Related Topics
• Managing relationships between datasource tables on page 253


Managing a set of mapping rules


This section shows how to add, delete and modify your mapping rules quickly.

Viewing all the mapping rules

Data Federator lists all the mapping rules in the Mapping rules window.
• Click Target tables, then your-target-table-name, then Mapping
rules.
The Mapping rules window appears, showing a list of your mapping
rules.

Opening a mapping rule

There are two ways to open a mapping rule.


1. Either:
a. Find the mapping rule in the tree list and click its name.
2. Or:
a. Open the Mapping rules window as in Viewing all the mapping rules
on page 276.
b. Click the Edit this mapping rule icon beside the mapping rule that you want to open.


The Target tables > your-target-table-name > Mapping rules > your-mapping-rule-name window appears, where you can modify your mapping rule.

Copying a mapping rule

Copying a mapping rule is a quick way to add a new mapping rule. When
you copy a mapping rule, the new mapping rule contains the same datasource
tables, lookup tables, and correct mapping formulas as the original mapping
rule.

Note:
Data Federator only copies correct mapping formulas. Therefore, even if
your original mapping rule is complete, the copied mapping rule may be
incomplete.
1. Open the Mapping rules window as in Viewing all the mapping rules on
page 276.
2. Click the Copy this mapping rule icon beside the mapping rule that you want to copy.


A message box appears. The message asks you to confirm the copy.

When you confirm, the Target tables > your-target-table-name > Mapping rules > CopyOfyour-mapping-rule-name window appears, where you can modify your new mapping rule.

3. In the General pane, in the Description box, type a new description of your mapping rule. Data Federator uses all the characters before the first space of this description as the label of your mapping rule.

Printing a mapping rule

You can print the definition of a mapping rule to a PDF file.


1. Open the Mapping rules window as in Viewing all the mapping rules on
page 276.
2. Click the Print this mapping rule icon beside the mapping rule that you want to print.


Deleting a mapping rule

Delete a mapping rule when you do not need any of the mapping formulas
between datasources and target tables inside the mapping rule.
1. Open the Mapping rules window as in Viewing all the mapping rules on
page 276.
2. Select the check box beside the mapping rule that you want to delete.
You can also select multiple mapping rules at the same time.

3. Click Delete.
A message box appears. The message asks you to confirm the deletion.

When you confirm, the selected mapping rule is deleted.

Displaying the impact and lineage of mappings


1. Open your mapping rule.
2. Click Impact and Lineage.
The Impact and lineage pane for your mapping rule expands and appears.

Related Topics
• How to read the Impact and lineage pane in Data Federator Designer on
page 52

Activating and deactivating mapping rules
In order to test your targets, you can deactivate mapping rules that you are
not ready to use.

By default, all mapping rules that you add are activated.


Deactivating a mapping rule

You can deactivate a mapping rule from the Target tables > your-target-
table-name > Mapping rules window.
1. In the tree list, expand your-target-table-name, then click Mapping
rules.
2. In the List of mapping rules pane, check the box beside the mapping
rule that you want to deactivate.
3. Click Deactivate.
The mapping rule is deactivated.

The mapping rule appears in gray in the tree list.

In the tree list, you can click your-target-table-name to see that the target's status has been updated; for example, it may change to "mapped" if the mapping rule that you deactivated was the only one preventing the target from being mapped.

Activating a mapping rule

All mapping rules are activated by default.


To activate a mapping rule that you previously deactivated, follow the same
procedure as in Deactivating a mapping rule on page 280, and use the
Activate button instead.

Testing mappings
To test a mapping rule, you must verify if the information you entered allows
Data Federator to correctly populate the target tables.

You can encounter the following problems:


• You have written a mapping formula that maps the wrong value.
• Your mapping formulas do not result in sufficient information for your
target columns.

• Your mapping formulas result in null values in columns that must not be
NULL.

Data Federator lets you test a mapping rule by using the Query tool pane.

Testing a mapping rule

• You must have added a mapping (see Mapping datasources to targets process overview on page 216).
• You must have added formulas to map all the values in the target table
(see Writing mapping formulas on page 219).

You can run a query on a mapping rule to test that it is correctly mapping
values to the target table.
1. In the tree list, expand your-target-table-name, expand Mapping
rules, then click your-mapping-rule-name.
The Target tables > your-target-table-name > Mapping rules >
your-mapping-rule-name window appears.

2. In the Mapping rule test tool pane, click View data to see the query
results.
For details on running the query, see Running a query to test your
configuration on page 614.

For details on printing the results of the query, see Printing a data sheet
on page 617.

Data Federator displays the data in columns in the Data sheet frame.

3. Verify the values appear correctly.


Otherwise, try adjusting the mapping rule again.

Example tests to run on a mapping rule

Tip:
Example tests to perform on your mapping rule
• Fetch the first 100 rows.


Run a query, as in Testing a mapping rule on page 281, and select the
Show total number of rows only check box.

The number of rows will appear above the query results.


• Fetch a single row.

For example, if you have a target table with a primary key of client_id in
the range 6000000-6009999, type:

client_id=6000114

in the Filter box.

Click View data, and verify the value of each column with the data in your
datasource table.
• Verify that the primary key columns are never NULL.

Type the formula:

client_id <> NULL

If any of the returned columns are NULL, verify that your mapping rule
does not insert NULL values.

Managing datasource, lookup and domain tables in a mapping rule
This section shows how to add, delete, or view the contents of the tables
that participate in a mapping rule. These can be datasource tables, lookup
tables, or domain tables.

Adding a table to a mapping rule

The following procedure shows how to add any type of table to a mapping
rule.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name

The your-mapping-rule-name window appears.
2. In the Table relationships and pre-filters pane, click Add table.
A new window Add a table to the mapping appears:

3. Select a specific table.


The name of the selected table appears in the Selected table field.
4. By selecting the appropriate checkboxes as required, define its alias,
whether it should be a core table and whether it should have distinct rows.
5. Click OK to add the table to the mapping rule.


Replacing a table in a mapping rule

The following procedure shows how to replace any type of table in a mapping
rule.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Either:
a. Select the table to be replaced and click Edit.
A new window Edit the mapping source appears:


3. Or:
a. Right-click the table to be replaced.
A context-sensitive menu appears:

b. Click Edit.
A new window Edit the mapping source appears:


4. Expand the Tables tree list and select the replacement table.
The name of the selected table appears in the Replace with table field.
5. Click OK to add the replacement table to the mapping rule.

Deleting a table from a mapping rule

The following procedure shows how to delete any type of table from a
mapping rule.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Right-click the table to be deleted.
A context-sensitive menu appears:

3. Click Remove.
4. Click OK.
The selected table is deleted from the mapping rule, but a reference to it
remains.
Note:
The target table itself is not deleted.

Viewing the columns of a table in a mapping rule

The following procedure shows how to view the columns of any type of table
when it is part of a mapping rule.

• Click Target tables, then your-target-table-name, then Mapping rules, then your-mapping-rule-name, then Tables, then your-table-name.
The columns in the expanded table appear.

Setting the alias of a table in a mapping rule

The following procedure shows how to set the alias of a table in a mapping
rule.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name
The your-mapping-rule-name window appears.
2. Either:
a. Select the table whose alias you want to set and click Edit.
A new window Edit the mapping source appears:


3. Or:
a. Right-click the table whose alias you want to set.
A context-sensitive menu appears:

b. Click Edit.
A new window Edit the mapping source appears:


4. In the Properties panel, enter an alias in the Update table alias field
and click OK.
The alias of the table in the mapping rule is set.

Restricting rows to distinct values

The following procedure shows how to restrict values in a table used in a mapping rule to distinct values only.
1. Edit the mapping rule.
a. Click Target tables, then your-target-table-name, then Mapping
rules, then your-mapping-rule-name


The your-mapping-rule-name window appears.


2. Either:
a. Select the table whose rows you want to restrict and click Edit.
A new window Edit the mapping source appears:

3. Or:
a. Right-click the table whose rows you want to restrict.
A context-sensitive menu appears:


b. Click Edit.
A new window Edit the mapping source appears:

4. Select the distinct rows check box.


5. Click OK to restrict the values in the selected table to distinct values only.


Details on functions used in formulas


See Function reference on page 624 for a list of functions in Data Federator.



Managing constraints


Testing mapping rules against constraints


This section describes how to test your mappings in Data Federator. This is
a way to test the integrity of the data in order to improve the definition of your
mapping rules, before you put a project into production.

Defining constraints on a target table


This section describes how to define the constraints that you want to check
on your mapping rules.

You can define constraints once you have defined a target table.

Types of constraints

Data Federator Designer checks several pre-defined constraints on mapping rules, and lets you define custom constraints.

This table describes the types of constraints that you can run in Data
Federator Designer.

key constraint: checks if the values of the key in the target table are unique

NOT-NULL constraint: checks if the values of a column in the target table are not null

custom constraint: checks a formula that you define on the columns of a target table

domain constraint: checks if the values in an enumerated column are in the associated domain table. Every enumerated column has a domain table. This domain table defines the valid values for the column. The domain constraint checks that all the column values are in the domain.

Related Topics
• Defining key constraints for a target table on page 295
• Adding a domain table to enumerate values in a target column on page 55

Defining key constraints for a target table

You define a key constraint when you create the schema of the target table.
1. In the tree list, click your-target-table-name.
The Target tables > your-target-table-name window appears.

2. Select the Key check box for each column that you want to define as a
key.
3. Click Save.
When you click Constraints > your-key-constraint-name in the
tree list, in the Constraint checks pane, all mapping rules that have the
status "completed" appear in the list.

Defining not-null constraints for a target table

You define a NOT-NULL constraint when you create the schema of the target
table.


1. In the tree list, click your-target-table-name.


The Target tables > your-target-table-name window appears.

2. Select the Not null check box for each column on which you want to
define a NOT-NULL constraint.
3. Click Save.
When you click Constraints > your-column-name_not_null in the tree
list, in the Constraint checks pane, all mapping rules that have the status
completed appear in the list.

Defining custom constraints on a target table

• You must have added a mapping rule (see Adding a mapping rule for a
target table on page 217).

You can define a custom constraint on a target table by writing a constraint formula.
1. In the tree list, expand your-target-table-name, then Constraints.
The your-target-table-name > Constraints window appears.

2. Click Add.
The your-target-table-name > Constraints > your-constraint-name > New constraint window appears.

3. In the General pane, type a name and description for your constraint.
4. In the Constraint definition pane, select a type for your constraint and
enter a constraint formula.
5. Click Save.
The custom constraint is added to the set of available constraints.

In the Constraint checks pane, all mapping rules that have the status
"completed" appear in the list.

Syntax of constraint formulas

Use the following rules when writing a constraint formula:

• Write a formula that returns a BOOLEAN value.
• Refer to columns by their aliases. The alias is either an id number or a
name (An or [column_name]).
• Use the Data Federator functions to construct the column values or
constants.
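
For instance (an illustrative sketch only; the column names are assumptions, and combining conditions with AND is assumed to behave as it does in relationship formulas), a custom constraint formula could be:

[order_date] > '2000-01-01' AND [quantity] > 0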

Example: Basic functions you use in a constraint formula

Table 6-2: Examples of basic functions in a constraint formula

To do this: check if a date is later than 01-01-1970
Use the formula: date > '01-01-1970'

For a full list of functions that you can use, see Function reference on
page 624.

For details about the data types in Data Federator, see Using data types and
constants in Data Federator Designer on page 604.

Configuring a constraint check

Use these parameters when Computing constraint violations on page 300 or Computing constraint violations for a group of mapping rules on page 301.

Parameter: Available columns
Description: lists the available columns that you can test

Parameter: Selected columns
Description: lists the columns that you have chosen to test
• Click on the columns in the Available columns list to add columns to your constraint. They appear in the Selected columns list.
• Click on the columns in the Selected columns list to remove columns from your constraint.
• Click All to add all the columns to your constraint.
• Click None to remove all the columns from your constraint.
• Click Default to add the default columns to your constraint. The default columns for a constraint depend on the constraint type.

Parameter: Sort by
Description: the column by which the constraint results are ordered

Parameter: Sort order
Description: the order in which Data Federator displays the constraint rows

Parameter: Retrieved rows
Description: specifies how many rows you want your constraint to return. Use this box to limit the size of the returned data when your constraint may return a large number of rows.

Parameter: Compute total number of constraint violations
Description: specifies if you want your constraint to return a count of all violations. This option will retrieve more detailed information, but it will double the processing time of the query.

Checking constraints on a mapping rule


This section describes how to check the constraints that you defined on a
single mapping rule.

The purpose of analyzing constraint violations

Analyzing constraint violations indicates what you should do to improve the integrity of the data that a mapping rule returns.

To analyze the constraint violations a mapping rule returns, you must check
constraints on the mapping rule.

Tip:
Resolving constraint violations in a constraint check

There are several ways to resolve a failed constraint.


• Change the mapping rule to handle the cases revealed by the constraint
violations.
• Filter the constraint violations to help organize them into smaller subsets.
• Change the constraint violations in the source data, if the source data is
erroneous.
• Relax the constraint, in the case that it rejects useful data.


Related Topics
• Checking constraints on a mapping rule on page 299
• Viewing constraint violations on page 303
• Mapping datasources to targets process overview on page 216
• Filtering constraint violations on page 302

Computing constraint violations

• You must have defined the constraints for the mapping rule.

See:
• Defining key constraints for a target table on page 295,
• Defining not-null constraints for a target table on page 295 or
• Defining custom constraints on a target table on page 296.

Data Federator lets you compute all the rows in a target table that do not
satisfy a constraint.
1. In the tree list, expand Target tables, then your-target-table-name, then Constraints, then click your-constraint-name.
The Target tables > your-target-table-name > Constraints > your-constraint-name window appears.

2. In the Constraint checks pane, click the Edit contents icon beside the mapping rule you want to test.


The Target tables > your-target-table-name > Constraints > your-constraint-name > your-mapping-rule-name window appears.

3. In the Query check pane, click Check constraint.


In the Current check results pane, the Number of constraint violations
box displays the number of violations that caused the constraint to fail,
and the date of the check is displayed in the Launch date box.

The value ">n" appears in the Number of constraint violations box if
the number in the Retrieved rows box is "n", and the number of constraint
violations is greater than "n".

For details on the settings of the Query check pane, see Configuring a
constraint check on page 297.

4. Enter your comments about this check in the Comments box.


5. In the Current check results pane, click View results.
The rows in your mapping rule that fail the constraint appear in the Data
sheet frame.

Computing constraint violations for a group of mapping rules

• You must have defined the constraints for the mapping rule.

See:
• Defining key constraints for a target table on page 295,
• Defining not-null constraints for a target table on page 295 or
• Defining custom constraints on a target table on page 296.

Data Federator lets you compute all the rows in a target table that do not
satisfy a constraint.
1. In the tree list, expand Target tables, then your-target-table-name,
then Constraints, then click your-constraint-name.
The Target tables > your-target-table-name > Constraints >
your-constraint-name window appears.

2. In the Constraint checks pane, select the check boxes beside the
mapping rules you want to test.
3. Click Check constraints.
In the Constraint checks pane, the date of the check is displayed in the
Launch date column, and the number of violations that caused the
constraint to fail is displayed in the Violations column: zero, an integer,
or the value ">n".


If you have:
• constraint violations, see Viewing constraint violations on page 303.
• no constraint violations, see Marking a mapping rule as validated on page 303.

Filtering constraint violations

You must have computed the constraint violations for at least one constraint
(see: Computing constraint violations on page 300).

You can filter constraint violations into subsets in order to help you analyze
their sources of error.
1. In the Target tables > your-target-table-name > Constraints >
your-constraint-name > your-mapping-rule-name window, in
the Filtered constraint violations pane, click Add.
The Target tables > your-target-table-name > Constraints >
your-constraint-name > your-mapping-rule-name > Filtered
constraint violations window appears.

2. In the Constraint definition pane, in the Filter on constraint violations
box, type a formula to filter the constraint violations.
For example, if your constraint returns all dates that are greater than the
year 2000, you can enter the formula date > toDate( '2006-01-01' ).

3. In the Query check pane, click Check constraint.
In the Current check results pane, the Number of constraint violations
box shows only the number of violations that match the filter.

4. In the Current check results pane, click View results.


The Data sheet frame appears, showing the filtered violations.

5. Click Save.
Your filter is saved.

You can create as many filters as you need to organize and resolve the
constraint violations.


Marking a mapping rule as validated

After you have checked a constraint, and you are sure that a mapping rule
has satisfied the constraint formula, you can mark the mapping rule as
validated.

Data Federator remembers which constraints you marked as validated for
each mapping rule. Once you mark a mapping rule as validated for all the
required constraints, Data Federator changes the status of the rule to tested.

When a mapping rule is in the status tested, it can be deployed on the Data
Federator Query Server.
1. In the tree list, expand your-target-table-name, then Constraints,
then click your-constraint-name.
The your-target-table-name > Constraints > your-constraint-
name window appears.

2. Select the check box beside the mapping rule that you want to mark as
validated.
3. In the Constraint checks pane, click Validate.
The mapping rule is marked as validated for this constraint.

Related Topics
• Deploying a version of a project on page 324

Viewing constraint violations

• You must have checked the constraint at least once (see Computing
constraint violations on page 300).

Each time you check a constraint, its results are stored so you can read
them without checking the constraint again.
You can view the stored results of the most recent constraint check.
1. In the tree list, expand Target tables, then your-target-table-name,
then Constraints, then click your-constraint-name.


The Target tables > your-target-table-name > Constraints >
your-constraint-name window appears.

2. In the Constraint checks pane, click the Edit contents icon beside the
mapping rule you want to test.
The Target tables > your-target-table-name > Constraints >
your-constraint-name > your-mapping-rule-name window appears.

3. In the Current check results pane, click View results.


The result of the most recently checked constraint appears in the Data
sheet frame.

For details on printing the results of the constraint, see Printing a data
sheet on page 617.

The Constraint checks pane

The Constraint checks pane shows the characteristics of each mapping
rule for one constraint.

Column: Mapping rule
Description: the mapping rules on which this constraint must be checked.

Column: Launch date
Description: the date of the last check of this constraint.

Column: Violations
Description: the number of violations that caused the constraint to fail the
last check.

Column: Analysis
Description: the way that Data Federator calculated the last check.
• Enforced specifies that Data Federator checks this constraint by examining
the structure of your mapping rule; it is not necessary to examine the data.
• Violated specifies that Data Federator calculates that this constraint is
failed by the structure of the mapping rule; it is not necessary to examine
the data.
These types of checks are faster than a standard check.
• <empty> specifies that Data Federator must examine the data to check this
constraint.

Column: Validated
Description: whether the mapping rule has been validated.

Reports
This section describes the reports you can run on your constraints. It consists
of:
• "Generating a constraint report".

Chapter 7 Managing projects

Managing a project and its versions


This section describes how to manage the versions of a project while it is
under development.

While a project is in development, you can store and modify multiple versions
of it. When you are ready to deploy the project, you choose one of the
versions and deploy it to a Data Federator Query Server. Data Federator
also keeps a list of all the deployments you have made.

For details on deploying projects, see Deploying projects on page 321.

The user interface for projects

The following diagram shows what you see on the Data Federator user
interface when you work with projects:

The main components of the user interface for working with projects are:
• (A) the Projects tab, where you switch from a list of the projects to the
current project

• (B) the tab for the current version of the project (prj_accounts,
prj_chile, prj_customer_satisfaction)
• (C) the tree view, where you open the configuration of your projects
• (D) the main view, where you see a list of your projects, with their names
and descriptions
• (E) the checkbox, where you select the project you want to open

The life cycle of a project

A project can have multiple simultaneous versions:


• the current version
• several stored versions (archive files)
• several deployed versions

Note:
The current version is automatically saved into an archive file called Latest
Version.

The following table summarizes the life cycle of a project:

Version: current
Means: your working version of the project, which can be modified in Data
Federator Designer.
What you can do: store the project; include an archive file or a deployed
version of another project; deploy the project.

Version: archive file
Means: a past version of a project that has been stored on the server.
What you can do: load the archive file to make it the current version;
download the archive file to your file system.

Version: deployed
Means: a version that has been deployed on Data Federator Query Server and
can be used by applications.
What you can do: load the archive file to make it the current version;
download the archive file to your file system.

Related Topics
• Opening a project on page 41
• Including a project in your current project on page 315
• Deploying a version of a project on page 324
• Downloading a version of a project on page 313

Editing the configuration of a project

You can only open a project that is not locked by another user account. If it
is locked, wait for the other user account to unlock the project, or wait until
the other user account's session expires.

To define a project and to manage its versions, you must select the project.
1. Either:
a. Select the tab of the project.
The project's Configuration window appears.

Note:
The project is selected and you can manage its versions and its
description.

2. Or
a. At the top of the window, click Projects
The list of projects appears.
b. In the tree list, select the project.

The project's Configuration window appears.

Note:
The project is selected and you can manage its versions and its
description.

Related Topics
• Unlocking projects on page 43
• Adding a project on page 41

Storing the current version of a project

You can store the current version of a project in an archive file.

You can always load a version of a project that you have stored in an archive.
This lets you manage important project versions.
1. Open the project.
2. From the Store drop-down arrow, select Entire project.
You can also store the current version of selected target tables.
The New archive file window appears.
3. Enter a name and description in the Name and Description fields
respectively, and click Store.


Data Federator saves the current version of your project in an archive.


The archive appears in the tree list of the Projects tab, and in the Archive
files pane.
Note:
Once you have stored a version of a project, you can download its archive
file to your file system.

Related Topics
• Opening a project on page 41
• Loading a version of a project stored on the server on page 314
• Loading a version of a project stored on your file system on page 315
• Storing the current version of selected target tables on page 312
• Downloading a version of a project on page 313

Storing the current version of selected target tables

As well as storing the current version of selected target tables in an archive
file, you can also store the entire current version of a project.

You can always load a version of a project that you have stored in an archive.
This lets you manage important project versions.
1. Open the project.
2. From the Store drop-down arrow, select Select targets.
The New archive file window appears.
3. Enter a name and description in the Name and Description fields
respectively.
4. In the Targets selection pane, in the Select the targets to archive box,
select the check boxes beside the targets you want to archive.
When you select a target, its dependencies appear in the Dependencies
of box.

A dependency is a target that the selected target uses as a source.

You can click the name of a target in the Select the targets to archive
box to show its dependencies without selecting it.

Use the check box at the top of the Select the targets to archive box to
select all the displayed targets.

5. Click Store.
Data Federator saves the selected targets of your project in an archive.
The archive appears in the tree list of the Projects tab, and in the Archive
files pane.

Note:
Once you have stored a version of a project, you can download its archive
file to your file system.

Related Topics
• Opening a project on page 41
• Loading a version of a project stored on the server on page 314
• Loading a version of a project stored on your file system on page 315
• Storing the current version of a project on page 311
• Downloading a version of a project on page 313

Downloading a version of a project

You must have stored a version of your project.

You can download an archive file or a deployed version of a project to your
file system. You can import this downloaded file into a different
installation of Data Federator Designer.
1. Edit the configuration of the project.
2. Expand the Archive files pane, and click the Download archive file icon
of the project to be downloaded.


A browser dialog opens, asking you if you want to save the archive file.
3. Save the archive file on your file system.

Related Topics
• Editing the configuration of a project on page 310
• Storing the current version of a project on page 311

Loading a version of a project stored on the server

Data Federator lets you load a deployed version of a project if you want to
modify it. When you edit a deployed version, you must deploy it again before
your changes take effect.
1. Open the project.
2. Click Load.
3. Select the Archive on server radio button, then in the Select archive
or deployed version pane, expand your-project-name.
4. Click the name of the archive or deployed version you want to load.
The name of the clicked version appears in the Name box. The description
and creation date also appear.
5. Click Save.
Data Federator loads the contents of the deployed version and it becomes
the current version.

Related Topics
• Opening a project on page 41

Loading a version of a project stored on your file system

Data Federator lets you load a deployed version of a project if you want to
modify it. When you edit a deployed version, you must deploy it again before
your changes take effect.
1. Open the project.
2. Click Load.
3. Select the Archive on file system radio button, and click Browse.
The Choose file window appears.
4. Navigate to and click the name of the archive or deployed version you
want to load, and click Open.
You are returned to the Load from an archive window with the path of
your selected archive displayed in the Archive file field.
5. Click Save.
Data Federator loads the contents of the deployed version and it becomes
the current version.

Related Topics
• Opening a project on page 41

Including a project in your current project

You can merge the contents of an archive file or a deployed version with the
contents of the current version of a project.
1. Open the project.
2. Click Include.
3. Select an archive to include.
You can select an archive either from the Data Federator server or from
your file system. You can do both of these the same way as loading a
version of a project.


4. Choose how Data Federator treats included components that have the
same names as existing ones.

Table 7-2: How Data Federator treats included components that have the same names
as existing ones

When you want to replace existing datasource tables, domain tables, lookup
tables and target tables, including their mappings, when names match:
Clear both check boxes: Keep existing datasources, domain and lookup tables
when names match and Keep existing mappings when target table names match.
This will replace datasource tables, domain tables, lookup tables and mappings
when components of the same name are included.

When you want to keep existing datasources, domain and lookup tables when
names match:
Select the Keep existing datasources, domain and lookup tables when names
match check box. This will keep the existing datasource, domain and lookup
tables when the ones in the included file have the same names. However, if the
existing datasource has no draft version, then the final version of the
matching datasource becomes the draft version. An error message appears if an
included mapping rule points to an included datasource table whose schema
changes because it has the same name as an existing datasource table. You will
need to correct this.

When you want to keep existing mappings when target table names match:
Select the Keep existing mappings when target table names match check box.
This will keep the existing mappings when the target tables in the included
file have the same names as the existing target tables. An error message
appears if an existing mapping rule points to an existing datasource table
whose schema changes because it has the same name as an included datasource
table. You will need to correct this. Furthermore, an existing mapping rule
may not be valid if it points to an existing datasource table whose name
matches an included datasource table, and is therefore replaced. This will
also need to be corrected.

The Data Federator procedure for treating included components is
illustrated in the image below, in the example where neither the Keep
existing datasources, domain and lookup tables when names match
nor the Keep existing mappings when target table names match check box is
selected.

Note that the large, shape-containing squares represent target tables,
and the small rectangles represent mappings in target tables:


5. Choose how Data Federator treats the status of included components.

Table 7-3: How Data Federator treats the status of included components

When you want to try to flag mapping rules in all included target tables as
validated:
Check the box Validate all target tables. Data Federator flags included target
tables as validated if they have the status mapped. This option is useful when
you work on multiple projects separately, and you have already validated the
project that you want to include.

When you want to leave mapping rules in included target tables with the flag
"validated" or "invalidated" that they had before you included them:
Clear the box Validate all target tables. This option is useful when you work
on multiple projects separately, and you want to validate all target tables on
a master project.

6. Click Save.
Data Federator includes the archive file into the project, the Projects >
your-project-name window appears, and a message is displayed
advising whether the project was included successfully.

Related Topics
• Opening a project on page 41
• Loading a version of a project stored on your file system on page 315
• Loading a version of a project stored on the server on page 314
• Mapping datasources to targets process overview on page 216

Opening multiple projects

You can only open a project that is not locked by another user account. If it
is locked, wait for the other user account to unlock the project, or wait until
the other user account's session expires.

You can open multiple projects at the same time by selecting them from the
"Projects" window.
1. At the top of the window, click the Projects tab.
2. In the tree list, click Projects.


The Projects window appears.

3. Click the checkbox beside the projects that you want to open.
4. Click Open.
A tab appears for each project you opened. Each tab contains the latest
version of the project.

Click the tab of the project that you want to work on.

Once your project is opened, you can add targets, datasources and
mappings to it.

Related Topics
• Opening a project on page 41
• Managing target tables on page 46
• About datasources on page 66
• Mapping datasources to targets process overview on page 216

Exporting all projects

You can export all of the projects at once from the Projects tab.
1. Above the tree view, click the Projects tab.
The Projects window appears.
2. Click Export all projects.
Data Federator creates an archive file whose name begins with
projects_export_. The message "Successfully exported all projects"
appears.
3. Click Download the archive file to download the projects.
Your browser asks you if you want to save the archive file.
4. Save the archive file using your browser.

Related Topics
• Storing the current version of a project on page 311

Importing a set of projects

• You must have exported a set of projects.


• Make sure that the project is not locked by another user account. If it is,
you can unlock it.

You can import a set of projects you previously exported.


1. Select the Projects tab.
The Projects window appears.
2. Click Import projects.
The Import projects from the file system window appears.
3. Click Browse and select an archive file you previously exported.
Note:
To overwrite all the projects listed in the tree view, select the Overwrite
existing projects checkbox.

4. Click Save.
Data Federator imports the set of projects contained in the selected
archive file.

Related Topics
• Unlocking projects on page 43
• Exporting all projects on page 320

Deploying projects
You deploy a project when you want other applications to query its tables.
When you deploy a project, it becomes a catalog on Data Federator Query
Server. The datasource, target, lookup and domain tables in the project
become tables in the catalog.

When you use the default settings to deploy a project, it is deployed on a
local installation of Data Federator Query Server. You can change these
settings to deploy on a remote installation of Query Server, or on a cluster
of servers.

Related Topics
• Deploying a version of a project on page 324

Servers on which projects are deployed

If you want to deploy on a remote installation of Query Server, you must first
configure Data Federator so that it can connect to a remote server.

If you want to deploy the project onto a server cluster, click the Add servers
button and add the details of each of the cluster servers.

This configures Data Federator Designer to deploy the project on the servers
for which you provided the details. Each server uses the same datasource
connection parameters, and accesses the same datasource.

Note:
When deploying to a cluster, if deployment to one of the servers fails for any
reason, the deployment is not rolled back. That is, the project is deployed to
all servers in the cluster except the server or servers where the deployment
fails.

Related Topics
• Configuring Data Federator Designer to connect to a remote Query Server
on page 584
• Sharing Query Server between multiple instances of Designer on page 585

User rights on deployed catalogs and tables

When you deploy a new project, your user account is automatically granted
"select" and"undeploy" rights on that project.

If you want other user accounts to query a project, your administrator must
create those user accounts and give them authorizations to read the tables
in your project.

Authorizations can apply to catalogs, schemas, tables and columns.

When you redeploy a project, the current authorizations are examined and
updated automatically. For example, any privilege that references a table or
column that no longer exists is automatically removed. This privilege is called
a deprecated privilege.

Related Topics
• About user accounts, roles, and privileges on page 504

Storage of deployed projects

When you deploy a version of a project to Query Server, Data Federator


also stores the version as an archive file on the Data Federator server. When
storing, Data Federator allows you to enter a title and description for the
archive.

You can also store versions of projects to the file system, and open versions
that you have stored either on the Data Federator server or on the file system.

Related Topics
• Managing a project and its versions on page 308

Version control of deployed projects

When you deploy a version of a project:


• The version becomes a deployed version on Data Federator Query Server.

Data Federator allows you to enter a title and description for the archive.
This lets you maintain a history of all the projects that you have deployed.
You can also import an old version of a project.
• Data Federator creates a catalog on Data Federator Query Server using
the name you chose for the catalog. This overwrites any previous catalog
of the same name.

For example, if you deployed projectA in the catalog OP, and you deploy
projectB in the catalog OP, projectA is overwritten.

Related Topics
• Managing a project and its versions on page 308


Deploying a version of a project

Before you deploy a project, ensure that you have:


• added a datasource and made it final
• added at least one target table
• created a mapping between the datasource and at least one target table

Note:
You can deploy an empty project, but it is not useful until you have done the
above.
This procedure shows how to deploy your project to Data Federator Query
Server. When you deploy a project on Query Server, your datasources and
target tables can be queried by an application that connects to Query Server.
1. Open the project.
2. Click Deploy.
3. In the Default deployment address pane, enter the deployment options.
Note:
Choose a unique catalog name for the project. If you choose a catalog
name that already has a project deployed in it, you will overwrite the
existing catalog.

4. Click Deploy current version.
The Projects > your-project-name > New deployed version window
appears.

5. Type a name and description for the deployed version in the General
pane.
6. Click Save.
Your project is accessible for querying through Data Federator Query
Server, at the catalog name that you specified. Target tables are available
in the schema named targetSchema, while datasource tables are available
in the schema named source.

Note:
Any previous deployment in the same catalog is overwritten.

For example, if you deployed projectA in the catalog MyCatalog, and
you deploy projectB in the catalog MyCatalog, projectA is
overwritten.

The target tables are deployed as indicated by the option Deploy only
integrated target tables.
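
Once the project is deployed, client applications can query its target tables
using fully qualified table names. The following query is a minimal sketch,
assuming you chose the catalog name MyCatalog and that the project contains a
target table named clients; see the section Accessing data for details on
table naming:

SELECT * FROM "/MyCatalog"."targetSchema"."clients"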

Related Topics
• Opening a project on page 41
• Managing target tables on page 46
• Making your datasource final on page 212
• Mapping datasources to targets process overview on page 216
• Servers on which projects are deployed on page 322
• User rights on deployed catalogs and tables on page 322
• Storage of deployed projects on page 323
• Version control of deployed projects on page 323
• Reference of project deployment options on page 327
• Using deployment contexts on page 325

Using deployment contexts

Deployment contexts allow you to easily deploy a project on multiple servers.


Using deployment contexts, you can define multiple sets of datasource
connection parameters to use with a project's deployment. Each deployment
context represents a different server deployment.

For example, you can define a deployment context for a group of datasources
running on a development server, and another deployment context for the
same group of datasources running on a production server.

When you define the connection parameters for a datasource, in place of


the configuration values, you use the corresponding parameter name. At
deployment time, you select a deployment context, and Data Federator
substitutes the appropriate values for the connection.

Within each deployment context that you define for a project, you use an
identical set of deployment parameter names to define the connection
parameters common to each datasource. You then use these names in your
datasource definition rather than the actual values, and at deployment time,
Data Federator substitutes the values corresponding to the deployment type
that you select.

The deployment parameters that you can use with a datasource definition
depend on the connection's resource type.

Related Topics
• Defining deployment parameters for a project on page 155
• Defining a connection with deployment context parameters on page 156


Reference of project deployment options

Server address and Server port: specifies the address and port of the Data
Federator Query Server on which you want to deploy your project. These are
default options, and you can change them each time you deploy a project.

Username and Password: specifies the username and corresponding password of
the account used to access the Data Federator Query Server. The Data Federator
Query Server administrator sets up this account. These are default options,
and you can change them each time you deploy a project.

Catalog name: specifies the name that you want your project to have on Data
Federator Query Server. When you deploy your project, it becomes a catalog on
the Data Federator Query Server. This option lets you name the catalog.
Note:
If you deploy two or more projects on the same installation of Data Federator
Query Server, you must use a different catalog name for each project in order
to distinguish them. For example, if you deployed projectA in the catalog OP,
and you deploy projectB in the catalog OP, projectA is overwritten.
This is a default option, and you can change it each time you deploy a project.

Add servers button: displays the option to add a server or servers, when you
are deploying to a cluster.

Move to button: lets you move a server within a cluster of servers. For
cluster deployments, select a server, and use this button to re-locate the
server position in the list.

Chapter 8 Managing changes

Overview
This section describes the impact of your changes in Data Federator.
When working on a project in Data Federator, you can make changes to the
following components.
• targets
• mappings
• datasources
• lookup tables
• domain tables
• constraint checks

The type of change you make to each of these components can impact some
of the other components. This section lists what you should verify for each
type of change.

Verifying if changes are valid

Data Federator Designer displays an icon to indicate values that you must fix
before your changes are complete.

When you see this icon, click it to view a detailed message.


Modifying the schema of a final datasource
When you modify a datasource, verify the components in the following table.

To modify a final datasource, see the procedure at Editing a final datasource


on page 213.

Component: mappings
What to verify: Did you change a column that has a relationship to another
datasource table?
How to verify it: Edit the relationships in the Table relationships and
pre-filters, and verify if any of the relationships reference the column that
you changed. See Finding incomplete relationships on page 254.

Component: mappings
What to verify: Do the mapping rules directly reference datasource columns
that have changed?
How to verify it: Look for the icon that indicates an invalid formula. See
Mapping datasources to targets process overview on page 216.

Component: mappings
What to verify: Do the mapping rules directly reference datasource columns
that no longer exist?
How to verify it: Look for the icon that indicates an invalid formula. See
Mapping datasources to targets process overview on page 216.

Component: mappings
What to verify: Do the mapping formulas expect types that have changed?
How to verify it: Look for the icon that indicates an invalid formula. See
Mapping datasources to targets process overview on page 216.

Component: constraint checks
What to verify: Does the constraint check find constraint violations because
of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraints
still pass.

Component: lookup tables
What to verify: Do the lookup tables map a column that you modified in the
datasource?
How to verify it: Check the schema of your lookup table to make sure that the
columns still match the datasource. If they do not match, create a lookup
table with a schema that matches your new datasource. See Mapping values
between a datasource table and a domain table on page 248.

Deleting an installed datasource


When you delete a datasource, verify the components in the following table.

To delete a final datasource, see the procedure at Deleting a datasource


on page 210.


Component: mappings
What to verify: Did you delete a datasource table that participates in a
relationship to another datasource table?
How to verify it: Edit the relationships in the Table relationships and
pre-filters, and verify if any of the relationships reference the datasource
table that you deleted. See Finding incomplete relationships on page 254.

Component: mappings
What to verify: Do the mapping rules directly reference datasource columns
that no longer exist?
How to verify it: Look for the icon that indicates an invalid formula. See
Mapping datasources to targets process overview on page 216.

Component: constraint checks
What to verify: Does the constraint check find constraint violations because
of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraints
still pass.

Component: lookup tables
What to verify: Do the lookup tables map a column from a datasource you
deleted?
How to verify it: Edit the relationships in the Table relationships and
pre-filters, and verify if any of the relationships reference the datasource
that you deleted. You may need to add a different datasource, and a different
lookup table, to complete the relationship. See Finding incomplete
relationships on page 254.

Modifying a target
To modify a target, see Adding a target table manually on page 46.

When you modify a target, verify the following components.


Component: mappings
What to verify: Did you change the definitions of the primary keys?
How to verify it: You must consider what result a query will return when you
run it on the new definitions of the keys. Check if the new definition of your
keys yields a different result. See Mapping datasources to targets process
overview on page 216.

Component: mappings
What to verify: Do the mapping formulas result in types that have changed?
How to verify it: Look for the icon that indicates an invalid formula. See
Mapping datasources to targets process overview on page 216.

Component: constraint checks
What to verify: Does the constraint check find constraint violations because
of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraints
still pass.

Adding a mapping
To add a mapping, see Mapping datasources to targets process overview
on page 216.
When you add a mapping, verify the following components.

Component: constraint checks
What to verify: Does the key constraint check find constraint violations among
the set of mapping rules because of the new mapping?
How to verify it: Check that the key constraints still pass.

Modifying a mapping
To modify a mapping, see Mapping datasources to targets process overview
on page 216.

When you modify a mapping, verify the following components.

Component: constraint checks
What to verify: Does the constraint check find constraint violations among the
set of mapping rules because of changes to the mapping?
How to verify it: Check that the constraints still pass.


Adding a constraint check


To add a constraint check, see Defining constraints on a target table on
page 294.
When you add a constraint check, verify the following components.

Component: mappings
What to verify: Does the new constraint check make a mapping fail?
How to verify it: Check that the mappings still pass the constraint checks.

Modifying a constraint check


To modify a constraint check, see Defining constraints on a target table on
page 294.

When you modify a constraint check, verify the following components.

Component: mappings
What to verify: Does the constraint check make a mapping fail?
How to verify it: Check that the mappings still pass the constraint checks.

Modifying a domain table


To modify a domain table, see Adding a domain table to enumerate values
in a target column on page 55.

When you modify a domain table, verify the following components.

Component: mappings
What to verify: Does a mapping rule directly reference a value that you
changed in the domain table?
How to verify it: Run a query on the mapping, and check that there are no
blank values in the rows that reference the column that you changed in the
domain table. See Testing a mapping rule on page 281.

Component: constraint checks
What to verify: Do constraint checks fail because of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraint
checks still pass.

Component: lookup tables
What to verify: Does the lookup table reference columns that have changed in
the domain table?
How to verify it: Check the schema of your lookup table to make sure that the
columns still match the domain table. If they do not match, create a lookup
table with a schema that matches your domain table. See Referencing a domain
table in a lookup table on page 247.

Deleting a domain table


To delete a domain table, see Deleting a domain table on page 61.


When you delete a domain table, verify the following components.

Component: mappings
What to verify: Does a mapping rule directly reference a value in the deleted
domain table?
How to verify it: Run a query on the mapping, and check that there are no
blank values in the rows that reference the column in the deleted domain
table. See Testing a mapping rule on page 281.

Component: constraint checks
What to verify: Do constraint checks fail because of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraint
checks still pass.

Component: lookup tables
What to verify: Does the lookup table reference columns in the deleted domain
table?
How to verify it: Check the schema of your lookup table to make sure that the
columns do not reference columns in the deleted domain table. If they do
reference columns in the deleted domain table, add a domain table to replace
the one you deleted. See Referencing a domain table in a lookup table on
page 247.

Modifying a lookup table
To modify a lookup table, see Mapping values between a datasource table
and a domain table on page 248.
When you modify a lookup table, verify the following components.

Component: mappings
What to verify: Does a mapping rule use a value that you deleted or changed in
the lookup table?
How to verify it: Run a query on the mapping, and check that there are no
blank values in the rows that reference the column in the lookup table. See
Testing a mapping rule on page 281.

Component: constraint checks
What to verify: Do constraint checks fail because of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraint
checks still pass.

Deleting a lookup table


To delete a lookup table, see Deleting a table from a mapping rule on
page 286.

When you delete a lookup table, verify the following components.


Component: mappings
What to verify: Does a mapping rule use a value in the deleted lookup table?
How to verify it: Run a query on the mapping, and check that there are no
blank values in the rows that reference the lookup table. See Testing a
mapping rule on page 281.

Component: constraint checks
What to verify: Do constraint checks fail because of changes to the mappings?
How to verify it: If the mappings are affected, check that the constraint
checks still pass.

Chapter 9 Introduction to Data Federator Query Server

Data Federator Query Server overview


This chapter describes the architecture and the main administration functions
of Data Federator Query Server.

The Data Federator Query Server expands the standard database user
management and the SQL query language. This allows the query engine
and virtual target tables to provide additional functionality and higher
performance.

Data Federator Query Server architecture


The illustration below shows an overview of the main components of Data
Federator: Data Federator Query Server, Data Federator Designer,
and Data Federator Administrator.


How Data Federator Query Server accesses sources of data

Real-time access to sources of data is divided into two steps: the connector
and the driver.

Connector: The connector expands the functionality of the database driver to
work with Data Federator Query Server. The connector is an XML file (.wd file)
that defines the parameters by type of database and contains metadata about
the data managed by Data Federator Query Server.

Driver: The driver provides a common access method for querying tools. The
driver is supplied with the database. The driver is a file that defines the
access parameters to query the database it supports. Data Federator Query
Server supports ODBC and JDBC drivers.

Database (source): Data Federator Query Server supports many sources of data,
for example: Oracle, SQL Server, MySQL, CSV files, XML files or web services.


The illustration below shows an example of how the Data Federator Query
Server relates to the sources of data.


Data experts and data administrators

The database administrator is the person who sets up the connection and
allows others to connect to the database. This person is not necessarily the
data expert who controls the Data Federator Designer. For this reason, a
system of aliases facilitates access to the often highly-customized database
resources.

Once the connections are made, Data Federator provides an end-to-end


view via Data Federator Administrator and Data Federator Designer, from
the database resources to the target tables. These two web-based tools
respectively provide functionality for the data administrator and the data
expert.

Related Topics
• The Data Federator application on page 26

Key functions of Data Federator Administrator

The main functions performed by Data Federator Administrator on Data


Federator Query Server are as follows:
• Setting up and managing users and their roles
• Managing resources and database connections
• Monitoring query execution

Related Topics
• About user accounts, roles, and privileges on page 504
• Managing resources using Data Federator Administrator on page 483
• Data Federator Query Server overview on page 344
• Query execution overview on page 530


Security recommendations
Business Objects recommends that you use a firewall and use a standard
http protocol to protect and access the Data Federator Query Server.

Chapter 10 Connecting to Data Federator Query Server using JDBC/ODBC drivers

Connecting to Data Federator Query Server using JDBC
This section describes how to connect your applications using JDBC so that
they can retrieve data from Data Federator Query Server.

Installing the JDBC driver with the Data Federator installer

This procedure shows you what you need to install in order to use the JDBC
driver for Data Federator Query Server in your client application.

This procedure applies when you have the Data Federator or Data Federator
Drivers installer.
1. Use the Data Federator or Data Federator Drivers installer on the Data
Federator CD-ROM to install the JDBC driver.
See the Data Federator Installation Guide for installation details.

2. Add data-federator-installation-dir/JdbcDriver/lib/thindriver.jar
to the classpath that your client application must search when
loading the Data Federator Query Server JDBC driver.
3. Use this as the class name of the JDBC driver that your client application
loads:
com.businessobjects.datafederator.jdbc.DataFederatorDriver

Note:
Data Federator remains compatible with previous versions. You can still
use the class name LeSelect.ThinDriver.ThinDriver, but it is recommended
that you update to the new name above.

4. Launch your client application.


For certain applications, you may need to launch the JVM with the
following option:
-Djava.endorsed.dirs=data-federator-installation-dir/JdbcDriver/lib
If your application does not allow you to set the java.endorsed.dirs
option, set a system CLASSPATH variable that includes all the .jar files
from the directory:
data-federator-installation-dir/JdbcDriver/lib
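
For example, on a Unix-style system, a client could be launched with both
settings on the command line. This is only an illustrative sketch; MyClientApp
is a placeholder for your own client class, and the installation path must be
adapted to your system:

java -cp .:data-federator-installation-dir/JdbcDriver/lib/thindriver.jar \
  -Djava.endorsed.dirs=data-federator-installation-dir/JdbcDriver/lib \
  MyClientApp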

Installing the JDBC driver without the Data Federator installer

If you do not have the Data Federator installer, but you can access the JDBC
files that come with Data Federator, then you can make a JDBC connection
as follows.
1. Retrieve thindriver.jar from the machine where Data Federator is
installed, from the directory data-federator-installation-dir/JdbcDriver/lib.
If:
• the parameter commProtocol=jacORB is used, you will also need
avalon-framework-4.1.5.jar, jacorb.jar and logkit-1.2.jar
• your client application uses JDK1.4, you will also need icu4j.jar
2. Copy these files to a directory of your choice (your-jdbc-driver-directory).
3. Add your-jdbc-driver-directory/thindriver.jar to the classpath
that your client application must search when loading the Data Federator
Query Server JDBC driver.
4. Launch your client application.
For certain applications, you may need to launch the JVM with the
following option:

-Djava.endorsed.dirs=your-jdbc-driver-directory

If your application does not allow you to set the java.endorsed.dirs


option, set a system CLASSPATH variable that includes all the above .jar
files.


Connecting to the server using JDBC

This procedure shows how to establish a connection between your application


and Data Federator Query Server.
1. Install the JDBC driver for Data Federator Query Server.
2. In the client application that you are using to connect, enter the connection
URL:

jdbc:datafederator://host[:port][/[catalog]][[;param-
name=value]*]

Note:
Data Federator remains compatible with previous versions. You can still
use the url prefix jdbc:leselect:, but it is recommended that you update
to the new prefix above.

The catalog must be the same as the catalog name you used to deploy
one of your projects.

For example, if you named your catalog "OP":

jdbc:datafederator://localhost/OP

The parameters in the JDBC connection URL let you configure the
connection.
Note:
The classpath that your application uses must include:
your-jdbc-driver-directory/thindriver.jar

Your client application establishes a connection to Data Federator Query
Server.
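
For example, a complete URL that names a host, the default port 3055, the
catalog OP and a user account could look like the following. The host name,
user name and password here are only placeholders; the available parameters
are listed in Parameters in the JDBC connection URL on page 361:

jdbc:datafederator://myhost:3055/OP;user=jill;password=mypassword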

Related Topics
• Installing the JDBC driver with the Data Federator installer on page 350
• JDBC URL syntax on page 358
• Parameters in the JDBC connection URL on page 361
• Deploying a version of a project on page 324
• Example Java code for connecting to Data Federator Query Server using
JDBC on page 353


Example Java code for connecting to Data Federator Query Server using JDBC

The following code block suggests how to use the Data Federator driver to
connect to Data Federator Query Server and execute an SQL query.

The statement shown in this block is select distinct(CATALOG) from
/leselect/system/schemas where SCHEMA='targetSchema'. This statement
retrieves all of the distinct catalogs from the system table that lists all of
the schemas of target tables that you have deployed on Query Server.

You can replace the SQL statement in this example with any statement that
is supported by Data Federator. For the names of the stored procedures that
you can call using this kind of code, see the list of stored procedures.

Example: Code block showing how to connect to Data Federator Query
Server through JDBC

/* requires java.sql.Connection, java.sql.DriverManager, java.sql.Statement
   and java.sql.ResultSet; run inside a method that handles or declares
   ClassNotFoundException and SQLException */

/* loads the driver for Data Federator Query Server */
Class.forName( "com.businessobjects.datafederator.jdbc.DataFederatorDriver" );

/* sets up the url and parameters to pass in the url */
String strUrl = "jdbc:datafederator://localhost";
String strUser = "sysadmin";
String strPass = "sysadmin";

/* creates a representation of a connection */
Connection conn =
    DriverManager.getConnection( strUrl, strUser, strPass );

/* creates a representation of a statement */
Statement statement = conn.createStatement();

/* sets up the schema name */
String strSchema = "targetSchema";

/* writes the text of the statement */
String strSelectStatement =
    "select distinct(CATALOG) "
    + "from /leselect/system/schemas "
    + "where SCHEMA='"
    + strSchema
    + "'";

/* executes the query and retrieves the result set */
ResultSet rs =
    statement.executeQuery( strSelectStatement );

/* iterates through the rows */
while( rs.next() )
{
    /* prints the value of one column in the row */
    System.out.println( rs.getString( "CATALOG" ) );
}

/* closes the connections and frees resources */
rs.close();
statement.close();
conn.close();

Related Topics
• JDBC URL syntax on page 358
• Parameters in the JDBC connection URL on page 361
• System table reference on page 728
• List of stored procedures on page 744

Connecting to Data Federator Query Server using ODBC
This section describes how to connect your applications using ODBC so that
they can retrieve data from Data Federator Query Server.

Installing the ODBC driver for Data Federator (Windows only)

To connect to Data Federator Query Server via ODBC, you must use the
OpenAccess ODBC to JDBC Bridge.

The Data Federator Drivers installer installs and configures the OpenAccess
ODBC to JDBC Bridge.

• Use the Data Federator Drivers installer on the Data Federator CD-ROM
to install the OpenAccess ODBC to JDBC Bridge.
See the Data Federator Installation Guide for installation details.

Connecting to the server using ODBC

This procedure shows how to establish a connection between your application


and Data Federator Query Server.
1. Install the OpenAccess ODBC to JDBC Bridge for Data Federator Query
Server.
2. Open your operating system's "ODBC Data Source Administrator".
To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start, then Programs, then Administrative Tools,
then click Data Sources (ODBC).

3. Add a DSN entry of type OpenAccess ODBC to JDBC Bridge, and


configure it as follows.

For the parameter DSN Name, enter: a name of your choice.

For the parameter Driver Class, enter:
com.businessobjects.datafederator.jdbc.DataFederatorDriver

For the parameter URL, enter the following JDBC URL:
jdbc:datafederator://<host>[:<port>][/[<catalog>]][[;param-name=value]*]
• The <catalog> must be the same as the catalog name you used to deploy one of
your projects. For example, if you named your catalog "OP":
jdbc:datafederator://localhost/OP
• Add parameters in the JDBC connection URL as required.

• Click Test to test the connection to Data Federator Query Server.

4. In your ODBC client application, use the DSN name that you created in
your "ODBC Data Source Administrator".
Your client application can establish an ODBC connection to Data
Federator Query Server.

Related Topics
• Installing the ODBC driver for Data Federator (Windows only) on page 354
• Parameters in the JDBC connection URL on page 361
• Using ODBC when your application already uses another JVM on page 357
• Deploying a version of a project on page 324


Using ODBC when your application already uses another JVM
1. Edit the OpenAccess configuration file.
You can find the OpenAccess configuration file in
data-federator-drivers-install-dir\OaJdbcBridge\bin\iwinnt\openrda.ini.

2. Copy the value of the CLASSPATH property to your application's classpath.


For example, when you install Data Federator Drivers, the value of the
CLASSPATH property may be:

CLASSPATH=C:\Program Files\Business Objects\BusinessObjects Data Federator 12
Drivers\OaJdbcBridge\jdbc\thindriver.jar;C:\Program Files\BusinessObjects Data
Federator XI 3.0\OaJdbcBridge\oajava\oasql.jar

You should add everything after the equal sign (=) to the classpath that
your application uses.

3. Change the value of the JVM_DLL_NAME property to the "JVM" path that
your application uses.
For example, when you install Data Federator, the value of the
JVM_DLL_NAME property may be:

JVM_DLL_NAME=C:\Program Files\Business Objects\BusinessObjects Data Federator
12 Drivers\jre\bin\client\jvm.dll

You should change everything after the equal sign (=) to the path to the
"JVM" that your application uses.

Accessing data
You can access data in the Data Federator using SQL statements.

Your queries include the names of the catalog, schema, and table, unless
the catalog or schema names are specified by default.

The structure of a table name in a query is:

"[catalog-name]"."[schema-name]"."[table-name]"


For details on catalog, schema and table naming, see Data Federator SQL
grammar on page 713.

Example: To query a table


If you named your catalog "/OP", and your schema is "targetSchema", you
could make the query:

SELECT * FROM "/OP"."targetSchema"."clients"

If your default catalog is "/OP", and your default schema is "targetSchema",


you could make the query:

SELECT * FROM clients
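
A client application can run the same kind of query through the JDBC driver
described earlier in this chapter. The following class is a minimal sketch,
not taken from the product itself, assuming a deployed catalog named /OP that
contains a target table named clients, and reusing the sysadmin account from
the earlier example:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryClients
{
    public static void main( String[] args ) throws Exception
    {
        /* loads the driver for Data Federator Query Server */
        Class.forName(
            "com.businessobjects.datafederator.jdbc.DataFederatorDriver" );

        /* connects to the catalog /OP on a local Query Server */
        Connection conn = DriverManager.getConnection(
            "jdbc:datafederator://localhost/OP", "sysadmin", "sysadmin" );

        /* queries a target table using its fully qualified name */
        Statement statement = conn.createStatement();
        ResultSet rs = statement.executeQuery(
            "SELECT * FROM \"/OP\".\"targetSchema\".\"clients\"" );

        /* prints the first column of each returned row */
        while( rs.next() )
        {
            System.out.println( rs.getString( 1 ) );
        }

        /* closes the connections and frees resources */
        rs.close();
        statement.close();
        conn.close();
    }
}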

Related Topics
• Properties of user accounts on page 511

JDBC URL syntax


The syntax of a JDBC URL in Data Federator Query Server is as follows.
Figure 10-1: JDBC URL for connecting to Data Federator Query Server directly
jdbc:datafederator://[ host[ :port ] ][ /catalog ][ ;param-name=value ]*

The table below describes the parts of the URL syntax shown above, and
applies to all following examples:

URL part: jdbc:datafederator://
Description: the prefix required to connect to Data Federator Query Server.
Note: Data Federator remains compatible with previous versions. You can still
use the url prefix jdbc:leselect:, but it is recommended that you update to
the new prefix above.

URL part: host
Description: the host where Data Federator Query Server is running. This is an
optional part. If you do not name a host, the URL points to localhost.

URL part: port
Description: the port on which Data Federator Query Server is running. This is
an optional part. If you do not name a port, the URL points to 3055.

URL part: catalog
Description: the name of the catalog to which you connect. This is an optional
part. If you do not name a catalog, the URL points to /OP.

URL part: param-name=value
Description: the JDBC parameters of the connection. This is an optional part.
JDBC parameters are case-sensitive. Multiple JDBC parameters must be separated
by semi-colons.


To connect to Query Server with fault tolerance, use the alternateServers URL parameter, as follows:
Figure 10-2: JDBC URL for connecting to Data Federator Query Server with fault tolerance

jdbc:datafederator://[ host[ :port ] ][ /catalog ][ ;alternateServers=host1[ :port1 ][ &host2:port2 ]* ][ ;param-name=value ]*

You can also use , (comma) to separate the entries in the list of hosts to use for fault tolerance, as follows:

Figure 10-3: Alternative JDBC URL for connecting to Data Federator Query Server with fault tolerance

jdbc:datafederator://[ host[ :port ] ][ /catalog ][ ;alternateServers=host1[ :port1 ][ ,host2:port2 ]* ][ ;param-name=value ]*
Note:
If you use OpenAccess, you must use the & (ampersand) to separate entries
in the list of hosts.

In the above syntax, each host entry can be either Data Federator Query
Server or Data Federator Connection Dispatcher.

Use the JDBC URL to define values for the parameters. Any values that you
set in the JDBC URL will override, for a single connection, the default values
and the values for your client.

Example:
The following example URL causes Data Federator to
• apply fault tolerance on host1, host2 and host3
• apply connection failover if the host mainhost is down

jdbc:datafederator://mainhost;alternateServers=host1,host2,host3;user=jill
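
A connection string built this way can be used directly with the JDBC driver. The following sketch is a hedged illustration: all host names, the port, the catalog and the credentials are placeholders, and it assumes the driver jar is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;

public class FaultTolerantConnect {
    public static void main(String[] args) throws Exception {
        // mainhost is tried first; host1, host2 and host3 are the alternate servers.
        String url = "jdbc:datafederator://mainhost:3055/OP"
                + ";alternateServers=host1:3055,host2:3055,host3:3055";
        try (Connection con = DriverManager.getConnection(url, "jill", "secret")) {
            // If mainhost is down, the driver fails over to one of the alternates.
            System.out.println("Connected to " + con.getMetaData().getURL());
        }
    }
}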

Related Topics
• Configuring fault tolerance for Data Federator on page 601
• Parameters in the JDBC connection URL on page 361


Parameters in the JDBC connection URL

This section lists the parameters you can add in the JDBC connection URL.
Note:
All parameters are case-sensitive.

Finding the default values of parameters


The default values for all the system and query parameters are stored in the
file thindriver.jar. You can access these files by using an archiving tool
to extract the contents of thindriver.jar.

When you extract the files from thindriver.jar, you will find the following
two files, which contain the default values of all the parameters.
• thindriver.jar/properties/thin_params.properties
• thindriver.jar/properties/thin_site_params.properties
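
Instead of an external archiving tool, you can also read these two files with a few lines of Java. The sketch below is a minimal illustration; the path to thindriver.jar reuses the installation path shown earlier in this chapter and must be adjusted to your own installation.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.jar.JarFile;

public class ShowDriverDefaults {
    public static void main(String[] args) throws Exception {
        // Adjust this path to where thindriver.jar is installed on your machine.
        String jarPath = "C:\\Program Files\\Business Objects\\BusinessObjects"
                + " Data Federator 12 Drivers\\OaJdbcBridge\\jdbc\\thindriver.jar";
        String[] entries = {
                "properties/thin_params.properties",
                "properties/thin_site_params.properties"
        };
        try (JarFile jar = new JarFile(jarPath)) {
            for (String name : entries) {
                System.out.println("== " + name + " ==");
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(jar.getInputStream(jar.getEntry(name))))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        // Each line lists a default parameter value.
                        System.out.println(line);
                    }
                }
            }
        }
    }
}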


Parameter                    Description

user
    user=[username]
    Specifies the name of the user connecting to Data Federator Query Server.

password
    password=[password]
    Specifies the password of the user connecting to Data Federator Query Server.

catalog
    catalog=[catalog-name]
    Specifies the default catalog for the current user. If present, the catalog
    element of the URL overrides the value of the catalog parameter.

schema
    schema=[schema-name]
    Specifies the default schema for the current user.

alternateServers
    alternateServers=host1[:port1][,host2[:port2][,...]]
    alternateServers=host1[:port1][&host2[:port2][&...]]
    Used for fault tolerance. Specifies the list of alternate servers to try
    upon failed connections.

alternateServersFile
    alternateServersFile=true
    alternateServersFile=false
    Used for fault tolerance. Specifies whether to read a file to find the list
    of alternate servers. This file is at
    USER_HOME/.datafederator/alternate_servers.list.

checkVersion
    checkVersion=[STRICT|MAJOR|NONE]
    Setting this parameter allows the verification of compatibility between the
    JDBC driver and Data Federator Query Server.
    STRICT means that an exception is thrown when a difference is detected
    between the version of the JDBC driver and the version of Data Federator
    Query Server. No connection can be established in such a case.
    MAJOR means that an exception is only thrown when a difference is detected
    between the major version of the JDBC driver and the major version of Data
    Federator Query Server. In such a case, no connection can be established.
    However, a connection can still be established when the minor versions are
    different.
    NONE means that any difference between Data Federator Query Server and the
    JDBC driver versions is allowed. A connection is always established, but
    unexpected effects can occur when the JDBC driver and Data Federator Query
    Server are incompatible.
    The default value of the checkVersion parameter is STRICT.

commProtocol
    commProtocol=[JacORB|JDKORB]
    Allows the user to choose the implementation of the communication protocol
    between the Data Federator JDBC driver and Data Federator Query Server.
    This parameter can take one of the following values: JacORB or JDKORB.
    JacORB means that the communication protocol is based on the JacORB
    implementation of the CORBA specification. JDKORB means that the
    communication protocol is based on the internal JDK implementation of the
    CORBA specification.
    The default value is JDKORB.

commWindow
    commWindow=[integer]
    Allows the user to specify the maximum number of chunks of rowsAtATime size
    that can be fetched in advance, when the parameter dataFetcher=PREFETCH.
    The default value is 2.

dataFetcher
    dataFetcher=[PREFETCH|ONDEMAND]
    Allows the user to choose the data fetching type used by the Data Federator
    JDBC driver to transfer query result set data from Data Federator Query
    Server.
    This parameter can take one of the following values: PREFETCH or ONDEMAND.
    ONDEMAND means that data is fetched in chunks of rowsAtATime size. PREFETCH
    means that multiple chunks of rowsAtATime size can be fetched in advance.
    The user will be able to retrieve data after the first chunk of rowsAtATime
    size is fetched.
    The default value is PREFETCH.

enforceMetadataColumnSize
    enforceMetadataColumnSize=[true|false]
    When you set this parameter to true, you can limit the string size of
    VARCHAR values from VARCHAR columns from metadata result sets to the value
    of the maxMetadataColumnSize parameter.
    When the enforceMetadataColumnSize parameter is set to true,
    maxMetadataColumnSize is also used for the following metadata information:
    maximum catalog name length, maximum schema name length, maximum table name
    length, maximum column name length, maximum user name length, maximum
    character literal length and maximum binary literal length. The maximum
    query statement length is defined as (1000 * maxMetadataColumnSize).
    This parameter can be TRUE or FALSE.
    The default value is FALSE.

maxMetadataColumnSize
    maxMetadataColumnSize=[integer]
    Allows the user to specify the maximum size for columns of type VARCHAR in
    metadata result sets.
    This parameter can take an integer value that should be at least equal to
    maximum(maxDecimalPrecision + 2, 29) and not greater than maxStringSize.
    The value 29 represents the display size of a TIMESTAMP value.
    The default value is 255.
    When retrieving VARCHAR values, only the parameter maxStringSize is used to
    truncate and eventually generate a truncation warning, even when dealing
    with metadata result sets.
    When the enforceMetadataColumnSize parameter is set to true,
    maxMetadataColumnSize is also used for the following metadata information:
    maximum catalog name length, maximum schema name length, maximum table name
    length, maximum column name length, maximum user name length, maximum
    character literal length and maximum binary literal length. The maximum
    query statement length is defined as (1000 * maxMetadataColumnSize).

enforceStringSize
    enforceStringSize=[true|false]
    When you set this parameter to true, the string size of VARCHAR data is
    checked and limited to the maximum string size defined in the maxStringSize
    parameter, and the strings are truncated to fit this. A warning is issued
    whenever a string truncation is applied.
    This parameter can be TRUE or FALSE.
    The default value is FALSE.

maxStringSize
    maxStringSize=[integer]
    Allows the user to specify the maximum size of string values from columns
    of type VARCHAR in result sets of queries.
    This parameter can take an integer value that should be at least equal to
    maximum(maxDecimalPrecision + 2, maxMetadataColumnSize, 29). The value 29
    represents the display size of a TIMESTAMP value.
    The default value is 9000.
    If the enforceStringSize parameter is set to true, then when a VARCHAR
    value has a size greater than maxStringSize, the value is truncated and a
    warning message is set.

enforceServerMetadataDecimalSize
    enforceServerMetadataDecimalSize=[none|maxScale|fixedScale]
    Allows the user to specify whether the precision and scale reported at
    metadata level by Data Federator Query Server for DECIMAL data should be
    checked and enforced over the DECIMAL data obtained from Data Federator
    Query Server. This parameter is used in conjunction with the
    enforceMaxDecimalSize parameter.
    This parameter can be NONE, MAXSCALE or FIXEDSCALE.
    The default value is NONE.

enforceMaxDecimalSize
    enforceMaxDecimalSize=[none|maxScale|fixedScale]
    Allows the user to specify whether the precision and scale of DECIMAL data
    should be limited to the user-defined maximum precision maxDecimalPrecision
    or maximum scale maxDecimalScale.
    This parameter can be NONE, MAXSCALE or FIXEDSCALE.
    The default value is NONE.
    The enforceMaxDecimalSize parameter is used in conjunction with
    enforceServerMetadataDecimalSize, primarily to configure the precision and
    scale of values returned by Data Federator Query Server.

maxDecimalPrecision
    maxDecimalPrecision=[>=20]
    Allows the user to specify the maximum precision of DECIMAL values.
    This parameter can take an integer value greater than or equal to 20. When
    ODBCMode = TRUE, it must also be less than or equal to 40.
    The default value is 27.
    The display size of a DECIMAL value is equal to (maxDecimalPrecision + 2).

maxDecimalScale
    maxDecimalScale=[0<=maxDecimalScale<=maxDecimalPrecision]
    Allows the user to specify the maximum scale of DECIMAL values.
    This parameter can take an integer value between 0 and maxDecimalPrecision.
    The default value is 6.

ODBCMode
    ODBCMode=[true|false]
    When you set this parameter to true, the following parameters are set by
    default:
    • enforceMetadataColumnSize = TRUE
    • enforceStringSize = TRUE
    • enforceMaxDecimalSize = MAXSCALE
    This parameter can be TRUE or FALSE.
    The default value is FALSE.

rowsAtATime
    rowsAtATime=[integer]
    Specifies the data fetch size. With this parameter, you give a hint to Data
    Federator Query Server on the number of rows to be sent in one data
    transfer to the Data Federator JDBC driver.
    By default the parameter is set to -1, which activates an algorithm that
    dynamically and automatically adjusts the size of the fetched data. The
    algorithm examines the memory available on the server and the query
    execution speed. The value gets adjusted throughout query execution in
    order to assure the best performance for all clients.
    It is recommended not to force this parameter to any value, as its dynamic
    adjustment helps to prevent client starving. Clients may be found in
    situations where they cannot retrieve data for a long time if another set
    of clients has forced the fetch size to a large value. The data preparation
    for fetching may consume a lot of fetching memory on the server, leaving no
    memory for other clients.
    On the other hand, if you need a client to perform faster than others, this
    parameter may help. In all cases the server may limit this value to a
    maximum, which is decided depending on the available resources for query
    execution.
    The default value is -1.

autoReconnect
    autoReconnect=[YES|NO]
    Specifies if the client application respawns connections upon error. When a
    connection respawns automatically, your session parameters will be reset to
    their default values. To prevent session parameters from being reset when
    there is an error in the connection, leave this value as NO.
    The default value is NO.
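
To illustrate how these parameters combine, the following sketch builds a connection URL that sets several of them. The host, catalog, schema and credentials are placeholders, and the parameter values are only examples of the documented settings.

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectWithParameters {
    public static void main(String[] args) throws Exception {
        // Parameter names are case-sensitive and separated by semi-colons.
        String url = "jdbc:datafederator://dfhost:3055/OP"
                + ";user=jill"
                + ";password=secret"
                + ";schema=targetSchema"     // default schema for this connection
                + ";checkVersion=MAJOR"      // only major version differences block the connection
                + ";dataFetcher=ONDEMAND"    // fetch data in chunks of rowsAtATime size
                + ";autoReconnect=NO";       // do not respawn connections upon error
        try (Connection con = DriverManager.getConnection(url)) {
            System.out.println("Connected as " + con.getMetaData().getUserName());
        }
    }
}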

Related Topics
• Configuring the precision and scale of DECIMAL values returned from
Data Federator Query Server on page 535


JDBC and ODBC Limitations


JDBC limitations

The following JDBC methods are not supported by the Data Federator JDBC
driver.
• Connection methods:
• setAutoCommit (no effect)
• setHoldability (no effect)
• setReadOnly (Data Federator Query Server is always read-only)
• setSavePoint, releaseSavePoint
• setTransactionIsolation (no effect)
• setTypeMap (no effect)
• commit, rollback (no effect)

• Statement methods:
• setCursorName (no effect)
• setFetchDirection (only supported when the parameter direction is
ResultSet.FETCH_FORWARD)

• setFetchSize (no effect)


• setMaxFieldSize (only supported when the parameter max is 0)
• setQueryTimeOut (only supported when the parameter seconds is 0)
• setEscapeProcessing (no effect)
• addBatch, clearBatch, executeBatch
• executeUpdate

• PreparedStatement methods:

• setArray/AsciiStream/BinaryStream/Blob/Bytes/
CharacterStream/Clob/Long/Ref/UnicodeStream/URL
• getParameterMetaData

• ResultSet methods:
• setFetchDirection (only supported when the parameter direction is
ResultSet.FETCH_FORWARD)

• setFetchSize (no effect)


• isAfterLast/BeforeFirst/First/Last
• absolute, first, last, beforeFirst, afterLast
• cancelRowUpdates
• deleteRow, insertRow
• getArray/AsciiStream/BinaryStream/Blob/Bytes/
CharacterStream/Clob/CursorName/Ref/UnicodeStream/URL
• moveToCurrentRow, moveToInsertRow
• refreshRow
• relative
• rowDeleted/Inserted/Updated
• update methods
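
In practice these limitations mean that result sets must be read sequentially, from the first row to the last. The following minimal sketch stays within the supported subset; the connection details and table name are placeholders reused from the earlier examples.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ForwardOnlyRead {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:datafederator://localhost:3055/OP;user=jill;password=secret";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement()) {
            // FETCH_FORWARD is the only fetch direction the driver supports.
            stmt.setFetchDirection(ResultSet.FETCH_FORWARD);
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT * FROM \"/OP\".\"targetSchema\".\"clients\"")) {
                // Scrolling methods such as absolute(), first() and last() are
                // not available, so iterate with next() only.
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}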

ODBC limitations

The following ODBC functions are limited on, or not supported by, the OpenAccess ODBC to JDBC Bridge for Data Federator.
• SQLAllocHandle (SQL_HANDLE_DESC not supported)
• SQLBindCol (Bookmark column not supported)
• SQLExtendedFetch (Accepts only SQL_FETCH_NEXT as fFetchType
value; Fetches only one row)
• SQLFreeHandle (SQL_HANDLE_DESC not supported)
• SQLBulkOperations


• SQLColumnPrivileges
• SQLColumnPrivilegesW
• SQLCopyDesc
• SQLFetchScroll
• SQLGetDescField
• SQLGetDescFieldW
• SQLGetDescRec
• SQLGetDescRecW
• SQLGetDiagField
• SQLGetDiagFieldW
• SQLGetDiagRec
• SQLGetDiagRecW
• SQLNativeSQL
• SQLNativeSQLW
• SQLParamOptions
• SQLSetDescField
• SQLSetDescRec
• SQLSetDescRecW
• SQLSetPos
• SQLSetScrollOptions
• SQLTablePrivileges
• SQLTablePrivilegesW

SQL Constraints
This section lists the SQL syntax that is accepted by Data Federator Query
Server.

Related Topics
• SQL syntax overview on page 688



11 Using Data Federator Administrator

Data Federator Administrator overview


This chapter covers the procedures to start Data Federator Administrator
and to perform basic queries. It also describes its user interface. Data
Federator Administrator is a web-based client application that serves as a
window onto Data Federator Query Server.

Before you begin, ensure you or your system administrator has followed the
steps to install and configure Data Federator in the Data Federator Installation
Guide.

Related Topics
• Data Federator Query Server overview on page 344

Starting Data Federator Administrator


1. Click Start > Programs > BusinessObjects Data Federator XI Release
3 > Data Federator Query Server Administrator.
2. Enter your user name and password in the User and Password text
boxes, then click OK.
A status message appears at the top right side of the screen: Connected
as user 'user_name'.

To end your Data Federator Administrator session
• Click the link Logout, that appears at the top right side of the screen.
Your session ends and the login screen appears.

Server configuration
The following section details steps to consider when configuring your Data Federator Query Server deployment.

Exploring the user interface
If you have the rights, you may proceed to perform any of the following
actions:
• Explore objects in the Objects tab.
• Enter queries in the My Query Tool tab.
• Monitor queries and manage Data Federator Query Server in the
Administration tab.

Related Topics
• Objects tab on page 385
• Managing user accounts with SQL statements on page 516
• Administration tab on page 387

Objects tab

You use this tab to navigate in Data Federator Query Server objects. At the
highest level, a list of functions appears. When you navigate within the
objects, the following tabs appear:
• Info tab, displays information about the object
• Content tab, displays the contents for that object in the form of a table

Click Refresh to refresh the table content.


My Query Tool tab

You use this tab most frequently to execute SQL queries and to perform
administrative functions on Data Federator Query Server.

When you first log in, the My Query Tool tab appears by default.


Related Topics
• Managing queries with Data Federator Administrator on page 397
• Key functions of Data Federator Administrator on page 347

Administration tab

You use this tab to monitor the queries that are running and display a history
of the queries that have executed already.

Viewing the running queries

Queries that are running are displayed in the Query Running tab. If you
have no running queries, nothing displays here.


Viewing the query history

The Query History tab displays the history of the last ten queries.


Each query record contains the following information:


• Status: the status of the query
• SQL: the SQL statement executed
• User: the user account that executed the query
• Exception: any exceptions that occurred
• Subqueries: the queries executed by the connectors

The Server Status menu item

The Server Status menu item displays a summary of the status of Data
Federator Query Server.


The Connector Settings menu item

The Connector Settings menu item lets you manage resources for
connectors to data sources.


The User Rights menu item

The User Rights menu item lets you manage users, roles and permissions.


The Configuration menu item

The System Parameters and Session Parameters tabs under the Configuration menu item let you manage system and session parameters.


The Statistics menu item

The Statistics menu item lets you view updated statistics of queries that
you run on Data Federator Query Server.


Managing statistics with Data Federator Administrator
Data Federator Administrator provides a window where you manage statistics
of tables and columns. You can use this window to view or configure statistics
in order to optimize your queries.

Statistics are estimates of the amount of data in a column or table. Data Federator can use statistics to optimize the queries it runs. By default, Data Federator calculates statistics by itself. If you want, you can override the values that Data Federator calculates.


Using the Statistics tab to refresh statistics automatically

You can use the Global Refresh of Statistics pane to refresh the statistics
of your tables automatically.
1. Select the option that controls how you want to perform the global refresh.
For details, see List of options for the Global Refresh of Statistics pane
on page 396.

2. Use the Statistics list pane to select the list of tables for which you want statistics to be refreshed automatically.
3. Click OK.

Selecting the tables for which you want to display statistics

You can use the Statistics List pane to select the list of tables for which
you want to display statistics.
1. Select List only statistics in.
2. Select values in the Catalog, Schema and Table boxes.
The statistics will be displayed for the tables that you select.

3. Click Refresh to update the statistics.

Recording statistics that Query Server recently requested

You can use the Statistics List pane to record the tables and columns for
which statistics were recently requested by Query Server.
1. Click List only cardinalities recently requested.
2. Set the session parameter Leselect.core.statistics.recorder.enabled to
true.


When queries are executed, the list of tables with their statistics is
displayed automatically.

List of options for the Global Refresh of Statistics pane

Option                    Description

Only columns
    Computes the number of distinct values for each column.

Only tables
    Computes the number of distinct values in each table.

All tables and columns
    Computes the number of distinct values for each column and the number of
    rows in each table.

Excluding when value is overridden by user
    The statistics will not be computed if you entered a value during the
    definition of the datasource.

Including when value is overridden by user
    The statistics will be computed and will overwrite the value you entered
    during the definition of the datasource.

Related Topics
• Defining the schema of a datasource on page 204

Managing queries with Data Federator Administrator
You can use Data Federator Administrator to run queries and keep track of
queries you have run in the past.
1. Log in to Data Federator Administrator.
2. Click the My Query Tool tab.
3. Enter an SQL-syntax query.
4. Set the options as follows.
• If you want to limit the results to be displayed in the Query Result,
enter a number in the text box for the Maximum rows to display.
The default value is 5.
• If you want to fetch more rows than are displayed (for example, to see
the activity of the query in the log files), select Yes from the drop down
list for Fetch all rows option.

Select No from the drop down list for Fetch all rows option if you want
to only fetch the number of rows that are displayed.

5. Click Execute to run the query.

Related Topics
• Starting Data Federator Administrator on page 384
• Data Federator Query Server query language on page 688
• Query execution overview on page 530

Executing SQL queries using the My Query Tool tab


1. Enter the SQL query in the Query Editor panel.
Note:
Select objects from the Catalog, Schema, Table and Column columns
of the table in the "Objects Browser", if required. Note also that the "Query
Editor" text area can be shrunk or enlarged by placing your cursor over
the grey bar at the bottom of the text area and moving it up or down
accordingly.


• If you want to limit the results to be displayed in the Query Results panel, enter a number in the text box for Maximum rows to display. The default value is 5.
• If you want to fetch more rows than are displayed (for example, to see
the activity of the query in the log files), select Yes from the drop down
list for Fetch all rows option.

Select No from the drop down list for Fetch all rows option if you want
to only fetch the number of rows that are displayed.

2. Click Run Query to execute the SQL query.


Note:
Select the text of one query with your cursor and click Run Selected
Query to execute just that selected query.
The query is run and the results displayed in the Query Results panel.



12 Configuring connectors to sources of data

About connectors in Data Federator


In Data Federator, configuring a connector means installing drivers or
middleware, and then setting parameters so that Data Federator can connect
to a source of data.

In general, Data Federator connects to sources of data in one of two ways.


• JDBC

For most sources of data that support JDBC, you just copy the JDBC
driver to a directory where Data Federator can find it, and there is nothing
more to configure.
• proprietary middleware

For sources of data that do not support JDBC, you must install the
vendor's middleware, and point Data Federator to the middleware. In
most cases, you have already installed the middleware, and you just need
to tell Data Federator where to find it.

Configuring Access connectors

Configuring Access connectors

In order to configure a connector for Access, you must install an ODBC driver
and create an entry in your operating system's ODBC data source
administrator.
1. Install the ODBC driver for Access.
2. Open your operating system's "ODBC Data Source Administrator".
To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start > Programs > Administrative Tools > Data
Sources (ODBC).

3. Create a DSN (Data Source Name entry) to point to your database.


Please refer to the vendor documentation for details on this configuration
step.

Provide the following information to users of Data Federator Designer who want to create a datasource for Access.

Parameter                    Description

Data Source Name
    The name that you defined in your operating system's data source manager,
    in the field Data Source Name.

Configuring DB2 connectors

Configuring DB2 connectors

In order to configure a connector for DB2, you must install JDBC drivers.
These drivers are usually available from the DB2 website.
1. Download the JDBC driver for DB2.
You get a driver in the form of a .jar file or several .jar files.

Use the following link to download the IBM DB2 JDBC Universal Driver. The product is called IBM Cloudscape.

http://www14.software.ibm.com/webapp/download/

To complete this download, you must register on the IBM website. Registration is free.

After you install IBM Cloudscape, you can find the driver file in ibm-cloudscape-install-directory/lib/db2jcc.jar. The file db2jcc.jar is the driver you can use for DB2.
2. Copy the driver .jar files to data-federator-install-dir/LeSelect/drivers

This directory is the default directory where Data Federator looks for
JDBC drivers. If you want to put the drivers in a different directory, you
must enter this directory name in the corresponding resource.

When Data Federator starts, it loads your JDBC drivers, and it can access
the corresponding JDBC data source.


Related Topics
• Pointing a resource to an existing JDBC driver on page 460

Configuring Informix connectors

Supported versions of Informix

This version of Data Federator supports Informix XPS.

The middleware is the IBM Informix ODBC Driver.

Configuring Informix connectors

In order to configure a connector for Informix, you must install an ODBC driver and create an entry in your operating system's ODBC data source administrator.
1. Install the ODBC driver for Informix.
2. Open your operating system's "ODBC Data Source Administrator".
To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start > Programs > Administrative Tools > Data
Sources (ODBC).

3. Create a DSN (Data Source Name entry) to point to your database.


Please refer to the vendor documentation for details on this configuration
step.

Provide the following information to users of Data Federator Designer who want to create a datasource for Informix.

Parameter                    Description

Data Source Name
    The name that you defined in your operating system's data source manager,
    in the field Data Source Name.

List of Informix resource properties

The table below lists the properties that you can configure in Informix
resources.


Parameter (Type)             Description

addCatalog (BOOLEAN)
    Set to True if you want to see the catalog as a prefix for table names.

addSchema (BOOLEAN)
    Set to True if you want to see the schema as a prefix for table names.

allowTableType (separated list of values, semi-colon)
    Lists the table types to take into consideration in the metadata that is
    retrieved by the underlying database.
    Special case: if this attribute is empty (' '), all table types are allowed.
    Example: 'TABLE;SYSTEM TABLE;VIEW'

authenticationMode (one of {configuredIdentity, callerImpersonation, principalMapping})
    configuredIdentity: authentication in the database is done using the value
    of the parameters username and password.
    callerImpersonation: authentication in the database is done using the same
    credential as the one used to connect to the Query Server.
    principalMapping: authentication in the database is done using a mapping
    from the Data Federator user to the user of the database. In this case, the
    parameter loginDomain should be set to a registered login domain.

capabilities (Mapping)
    Defines what the data source supports in terms of relational operators. It
    lists all capabilities supported by the database.
    Depending on the supported relational operators, Data Federator manages the
    queries differently. For example, if you specify outerjoin=false, that
    tells Data Federator Query Server to execute this operator within the Data
    Federator Query Server engine.
    An example is: isjdbc=true;outerjoin=false;rightouterjoin=true.
    The Data Federator documentation has a full list of capabilities.

defaultFetchSize (Default value)
    This parameter gives the driver a hint as to the number of rows that should
    be fetched from the database when more rows are needed.
    If the value specified is negative, then the hint is ignored.
    The default value is -1.

ignoreKeys (BOOLEAN)
    Set to True if you do not want the connector to query the data source to
    get keys and foreign keys metadata.

isPasswordEncrypted (BOOLEAN)
    Set to True if the password is encrypted. The password is defined by the
    password parameter.

maxConnectionIdleTime (INTEGER)
    The maximum time an idle connection is kept in the pool of connections. The
    unit is milliseconds.
    The default is 60000 ms (60 s). 0 means no limit.

nbPreparedStatementsPerQuery (INTEGER)
    Maximum number of prepared statements in the query pool.

networkLayer (pre-defined value)
    Specifies the network layer of the database that you want to connect to.
    When you create a resource, choose the value that corresponds to the
    database to which you are connecting.
    For Informix, this should be: Informix CLI

password (STRING)
    Defines the password of the corresponding user.
    Note: This property is a keyword, so you must enclose it in quotes when
    using the ALTER RESOURCE statement, e.g. "password".

schema (separated list of values, semi-colon)
    Defines the schema names or patterns that you access.
    Note: This property is a keyword, so you must enclose it in quotes when
    using the ALTER RESOURCE statement, e.g. "schema".
    You can specify several schemas. You can also specify wildcards for
    schemas.
    Example:
    'T%' = T followed by zero or more characters
    'S_' = S followed by any single character

setFetchSize (BOOLEAN)
    Defines if the connector should set the default fetch size.

sourceType (Predefined value)
    Identifies the version of the database. For Informix, possible values are:
    • Informix XPS 8.4
    • Informix XPS 8.5

sqlStringType (Predefined value)
    Defines the syntax used to generate the SQL string. This parameter lets
    Data Federator Query Server translate the queries expressed in the Data
    Federator SQL syntax to the syntax specific to the database.
    According to the query language of the database, the possible list of
    values is: sql92, sql99, jdbc3.
    Example:
    jdbc3 format: SELECT * from {oj T1 LEFT OUTER JOIN T2 on T1.A1=T2.A2}
    SQL92 format: SELECT * from T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2

transactionIsolation (Predefined value)
    Attempts to change the transaction isolation level for connections to the
    database. The transactionIsolation parameter is used by the connector to
    set the transaction isolation level of each connection made to the
    underlying database.
    The Data Federator documentation has more details about the
    transactionIsolation property.

user (STRING)
    Defines the username of the database account.
    Note: This property is a keyword, so you must enclose it in quotes when
    using the ALTER RESOURCE statement, e.g. "user".
    Example: ALTER RESOURCE "jdbc.myresource" SET "user" 'newuser'

supportsboolean (True/Yes, False/No)
    Specifies if the middleware or database supports BOOLEANs as first-class
    objects. The default value for this parameter depends on the database. For
    pre-defined resources, this parameter is already set to its correct value,
    but you can override it. Default: No.

maxRows (integer)
    Lets you define the maximum number of rows you want returned from the
    database (default value 0: no limit).

allowPartialResults (boolean)
    Must be yes to allow partial results if maxRows is set (yes/no, true/false;
    the default value is no).
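
Several of these properties, such as user, password and schema, are keywords and must be quoted in an ALTER RESOURCE statement, as the user entry above shows. As a minimal illustration, the sketch below submits that documented statement from a Java client. The resource name jdbc.myresource comes from the example above, the connection details are placeholders, and it is an assumption (not confirmed by this guide) that such statements can be sent from a JDBC client rather than from the Data Federator Administrator query tool.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SetResourceUser {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; the account must have the rights to
        // manage resources.
        String url = "jdbc:datafederator://localhost:3055/OP;user=admin;password=secret";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement()) {
            // Statement taken from the user property example above; "user" is a
            // keyword and is therefore quoted.
            stmt.execute("ALTER RESOURCE \"jdbc.myresource\" SET \"user\" 'newuser'");
        }
    }
}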

Related Topics
• transactionIsolation property on page 478


Configuring MySQL connectors

Configuring MySQL connectors

In order to configure a connector for MySQL, you must install JDBC drivers.
These drivers are usually available from the MySQL website.
1. Download the JDBC driver for MySQL.
You get a driver in the form of a .jar file or several .jar files.
http://dev.mysql.com/downloads/connector/j/5.1.html
2. Copy the driver .jar files to data-federator-install-dir/LeSelect/drivers

This directory is the default directory where Data Federator looks for
JDBC drivers. If you want to put the drivers in a different directory, you
must enter this directory name in the corresponding resource.

When Data Federator starts, it loads your JDBC drivers, and it can access
the corresponding JDBC data source.

Related Topics
• Pointing a resource to an existing JDBC driver on page 460


Specific collation parameters for MySQL

Property                    Description

datasourceSortCollation
    The source collation for sort operations.

datasourceCompCollation
    The source collation for comparisons.

datesourceBinaryCollation
    The source collation for binary comparisons.

Resource properties for setting collation parameters on MySQL


Three JDBC resource properties let you force the specific collation to use
for MySQL, even if your source of data has a different default collation.

To force a collation value for MySQL, change the value of the datasourceSortCollation, datasourceCompCollation or datesourceBinaryCollation JDBC resource properties.

Example: Setting specific collation parameters for MySQL


datasourceCompCollation="utf8_swedish_ci"

Related Topics
• Collation in Data Federator on page 495
• How Data Federator decides how to push queries to sources when using
binary collation on page 500
• List of JDBC resource properties on page 461
• Managing resources using Data Federator Administrator on page 483


Configuring Oracle connectors

Configuring Oracle connectors

In order to configure a connector for Oracle, you must install JDBC drivers.
These drivers are usually available from the Oracle website.
1. Download the JDBC driver for Oracle.
You get a driver in the form of a .jar file or several .jar files.
http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/index.html
2. Copy the driver .jar files to data-federator-install-dir/LeSelect/drivers

This directory is the default directory where Data Federator looks for
JDBC drivers. If you want to put the drivers in a different directory, you
must enter this directory name in the corresponding resource.

When Data Federator starts, it loads your JDBC drivers, and it can access
the corresponding JDBC data source.

Related Topics
• Pointing a resource to an existing JDBC driver on page 460

Specific collation parameters for Oracle

Resource properties for setting collation parameters on Oracle


A JDBC resource property lets you force the specific collation to use for
Oracle, even if your source of data has a different default collation.

The default setting is sessionProperties:NLS_TERRITORY=AMERICA;NLS_LANGUAGE=ENGLISH;NLS_SORT=BINARY;NLS_COMP=BINARY

The NLS_COMP and NLS_SORT parameters are used by Oracle to define the collation for comparison and sort operations. By default both NLS_COMP and NLS_SORT are set to BINARY.

To force a specific collation on Oracle, change the value of the sessionProperties JDBC resource property.

Related Topics
• Collation in Data Federator on page 495
• How Data Federator decides how to push queries to sources when using
binary collation on page 500
• List of JDBC resource properties on page 461
• Managing resources using Data Federator Administrator on page 483

How Data Federator transforms wildcards in names of Oracle tables

Certain wildcards in Oracle table names do not appear in Data Federator.


For example, when searching for all tables in an object type, a table with the
name abc/table1 appears as abc?22ftable1.

The table below describes how wildcards appear in Data Federator:


The Oracle character ...    Is replaced by ...

%                           ?225
/                           ?22f
\                           ?25c
.                           ?22e
#                           ?223
?                           ?23f

Configuring Netezza connectors

Supported versions of Netezza

This version of Data Federator supports Netezza NPS Server versions 3.0
or 3.1.

To let Data Federator connect to your Netezza NPS Server database, you
must install Netezza ODBC driver (versions 3.0 or 3.1).

Configuring Netezza connectors

In order to configure a connector for Netezza, you must install an ODBC driver and create an entry in your operating system's ODBC data source administrator.
1. Install the ODBC driver for Netezza.
2. Open your operating system's "ODBC Data Source Administrator".

414 Data Federator User Guide


Configuring connectors to sources of data
Configuring Netezza connectors 12
To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start > Programs > Administrative Tools > Data
Sources (ODBC).

3. Create a DSN (Data Source Name entry) to point to your database.


Please refer to the vendor documentation for details on this configuration
step.

Provide the following information to users of Data Federator Designer who want to create a datasource for Netezza.

Parameter                    Description

Data Source Name
    The name that you defined in your operating system's data source manager,
    in the field Data Source Name.

List of Netezza resource properties

The table below lists the properties that you can configure in Netezza
resources.


Parameter (Type)             Description

addCatalog (BOOLEAN)
    Set to True if you want to see the catalog as a prefix for table names.

addSchema (BOOLEAN)
    Set to True if you want to see the schema as a prefix for table names.

allowTableType (separated list of values, semi-colon)
    Lists the table types to take into consideration in the metadata that is
    retrieved by the underlying database.
    Special case: if this attribute is empty (' '), all table types are allowed.
    Example: 'TABLE;SYSTEM TABLE;VIEW'

authenticationMode (one of {configuredIdentity, callerImpersonation, principalMapping})
    configuredIdentity: authentication in the database is done using the value
    of the parameters username and password.
    callerImpersonation: authentication in the database is done using the same
    credential as the one used to connect to the Query Server.
    principalMapping: authentication in the database is done using a mapping
    from the Data Federator user to the user of the database. In this case, the
    parameter loginDomain should be set to a registered login domain.

capabilities (Mapping)
    Defines what the data source supports in terms of relational operators. It
    lists all capabilities supported by the database.
    Depending on the supported relational operators, Data Federator manages the
    queries differently. For example, if you specify outerjoin=false, that
    tells Data Federator Query Server to execute this operator within the Data
    Federator Query Server engine.
    An example is: isjdbc=true;outerjoin=false;rightouterjoin=true.
    The Data Federator documentation has a full list of capabilities.

defaultFetchSize (Default value)
    This parameter gives the driver a hint as to the number of rows that should
    be fetched from the database when more rows are needed.
    If the value specified is negative, then the hint is ignored.
    The default value is -1.

ignoreKeys (BOOLEAN)
    Set to True if you do not want the connector to query the data source to
    get keys and foreign keys metadata.

isPasswordEncrypted (BOOLEAN)
    Set to True if the password is encrypted. The password is defined by the
    password parameter.

maxConnectionIdleTime (INTEGER)
    The maximum time an idle connection is kept in the pool of connections. The
    unit is milliseconds.
    The default is 60000 ms (60 s). 0 means no limit.

nbPreparedStatementsPerQuery (INTEGER)
    Maximum number of prepared statements in the query pool.

networkLayer (pre-defined value)
    Specifies the network layer of the database that you want to connect to.
    When you create a resource, choose the value that corresponds to the
    database to which you are connecting.
    For Netezza, this should be ODBC.

password (STRING)
    Defines the password of the corresponding user.
    Note: This property is a keyword, so you must enclose it in quotes when
    using the ALTER RESOURCE statement, e.g. "password".

schema (separated list of values, semi-colon)
    Defines the schema names or patterns that you access.
    Note: This property is a keyword, so you must enclose it in quotes when
    using the ALTER RESOURCE statement, e.g. "schema".
    You can specify several schemas. You can also specify wildcards for
    schemas.
    Example:
    'T%' = T followed by zero or more characters
    'S_' = S followed by any single character

setFetchSize (BOOLEAN)
    Defines if the connector should set the default fetch size.

sourceType (Predefined value)
    Identifies the version of the database. For Netezza, possible values are:
    • Netezza Server

sqlStringType (Predefined value)
    Defines the syntax used to generate the SQL string. This parameter lets
    Data Federator Query Server translate the queries expressed in the Data
    Federator SQL syntax to the syntax specific to the database.
    According to the query language of the database, the possible list of
    values is: sql92, sql99, jdbc3.
    Example:
    jdbc3 format: SELECT * from {oj T1 LEFT OUTER JOIN T2 on T1.A1=T2.A2}
    SQL92 format: SELECT * from T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2

transactionIsolation (Predefined value)
    Attempts to change the transaction isolation level for connections to the
    database. The transactionIsolation parameter is used by the connector to
    set the transaction isolation level of each connection made to the
    underlying database.
    The Data Federator documentation has more details about the
    transactionIsolation property.

user (STRING)
    Defines the username of the database account.
    Note: This property is a keyword, so you must enclose it in quotes when
    using the ALTER RESOURCE statement, e.g. "user".
    Example: ALTER RESOURCE "jdbc.myresource" SET "user" 'newuser'

supportsboolean (True/Yes, False/No)
    Specifies if the middleware or database supports BOOLEANs as first-class
    objects. The default value for this parameter depends on the database. For
    pre-defined resources, this parameter is already set to its correct value,
    but you can override it. Default: No.

maxRows (integer)
    Lets you define the maximum number of rows you want returned from the
    database (default value 0: no limit).

allowPartialResults (boolean)
    Must be yes to allow partial results if maxRows is set (yes/no, true/false;
    the default value is no).

Related Topics
• transactionIsolation property on page 478


Configuring Progress connectors

Configuring connectors for Progress

Summary of the connection from Data Federator to Progress


In order to use a Progress connector, you must install the Progress
middleware and a driver that lets Data Federator connect to the Progress
middleware.

Details of the connection from Data Federator to Progress


In order to connect to Progress databases, you must do the following:
• install the OEM SequeLink Server for ODBC Socket 5.5 from the Data
Federator Drivers DVD (directory drivers/sl550socket). For Windows
platforms only.
• install the OEM ODBC driver for Progress OpenEdge (Data Federator
DataDirect Progress OpenEdge ODBC driver) using the Data Federator
Drivers installer. For Windows platforms only.
• install the Progress OpenEdge 10.0B client
• configure DSN entries to point to your Progress databases

Data Federator loads the JDBC driver for Progress OpenEdge. The JDBC
driver for Progress connects to the OEM SequeLink Server. The OEM
SequeLink Server connects to Data Federator DataDirect Progress OpenEdge
ODBC driver. The ODBC driver connects to the Progress OpenEdge 10.0B
client. Finally, the Progress OpenEdge 10.0B client connects to the Progress
database.

The OEM SequeLink Server and the Data Federator DataDirect Progress
OpenEdge ODBC driver should be on the same Windows machine as the
Progress OpenEdge 10.0B client.

The connection from the Progress OpenEdge 10.0B client to the Progress
database is covered in your Progress documentation.


Figure 12-1: Architecture of a connection from Data Federator to Progress

Related Topics
• Installing OEM SequeLink Server for Progress connections on page 423
• Configuring middleware for Progress connections on page 423

Installing OEM SequeLink Server for Progress connections

In order to bridge the JDBC driver for Progress to the Data Federator
DataDirect Progress OpenEdge ODBC driver, you must install the SequeLink
Server for ODBC Socket 5.5 OEM version. The SequeLink Server installation
is provided on the Data Federator DVD.
• Run the following script from the Data Federator DVD.
drivers/sl550socket/oemsetup.bat

You can find documentation on the SequeLink Server in the directory drivers/sl550socket/doc on the Data Federator DVD.

Configuring middleware for Progress connections


1. Install a Progress OpenEdge 10.0B client. See the Progress
documentation for details.


2. Set your environment variables to point to the Progress OpenEdge installation as follows.

   DLC=C:\Progress\OpenEdge
   PATH=%PATH%;%DLC%\bin

3. Run the Data Federator Driver installer and choose an install set that
contains the connector driver for Progress OpenEdge 10.0B.
4. Open your operating system's "ODBC Data Source Administrator".
On Windows, you configure DSN entries in the "ODBC Data Source
Administrator".

To open the "ODBC Data Source Administrator" on a standard installation of Windows, click Start > Programs > Administrative Tools > Data Sources (ODBC).

5. Add a DSN entry of the type "Data Federator DataDirect Progress OpenEdge", and configure it as follows.

For the parameter...         Enter...

Data source name
    A name of your choice: your-progress-data-source-name. For example,
    accounts-progress-data-source.

Description
    A description of your choice, to describe this Progress data source.

Host
    The name of the machine where your Progress database is installed.

Port
    The port of the Progress database.

Database name
    The name of the Progress database to which you want to connect.

User
    A username that has at least read privileges on the Progress database to
    which you want to connect. For example, progress-username.

6. Use the Data Federator DVD and install the SequeLink Server OEM
version (drivers/sl550socket/oemsetup.bat).
The SequeLink Server is a bridge between the JDBC driver for Progress
and the Data Federator DataDirect OpenEdge driver.

7. Open the administration interface of the SequeLink Server.


8. Add a data source, and configure it as follows.

For the parameter...         Enter...

data source name (in tree list)
    A name of your choice: your-sequelink-data-source-name.

the attribute DataSourceSOCODBCConnStr, in the term DSN
    The text DSN= followed by the name that you chose in the DSN entry for your
    ODBC driver, your-progress-data-source-name. For example, for the attribute
    DataSourceSOCODBCConnStr, enter DSN=accounts-progress-data-source.


When you complete the above steps, any connections that users create of
type jdbc.progress.openedge will connect to Progress through the SequeLink
Server.

Give the following information to users of Data Federator Designer that want
to connect to this Progress server.

Parameter                    Description

SequeLink data source name
    The name you defined as the data source name in the administration
    interface of SequeLink Server: your-sequelink-data-source-name.

SequeLink server host name
    The name of the host where you installed the SequeLink Server.

SequeLink server port
    The port of the host where you installed the SequeLink Server.

Related Topics
• Installing OEM SequeLink Server for Progress connections on page 423

Configuring SAS connectors

Configuring connectors for SAS

In order to use a SAS connector, you must install a driver that lets Data
Federator connect to a SAS/SHARE server.

A SAS/SHARE server is a server that allows you to connect to SAS data sets. For more information about SAS/SHARE, see the SAS website.

You can install the driver as you would install any other JDBC driver for Data
Federator.

Related Topics
• Pointing a resource to an existing JDBC driver on page 460
• http://www.sas.com/products/share/index.html

Supported versions of SAS

This version of Data Federator supports SAS with the SAS/SHARE server
version 9.1 or higher.

Installing drivers for SAS connections

In order to connect to SAS sources from Data Federator, you must install a
SAS/SHARE driver for JDBC.

The SAS/SHARE driver lets Data Federator connect to a SAS/SHARE server.


The SAS/SHARE server accesses your SAS data sets.

The SAS/SHARE driver for JDBC should be on the same machine as Data
Federator.

To set up your SAS/SHARE server, see your SAS documentation.

Figure 12-2: Architecture of an installation from Data Federator to SAS

• Install a driver for a JDBC connection to SAS, as you would install a regular JDBC driver in Data Federator.

Users can now add a datasource of type SAS.


Optimizing SAS queries by ordering tables in the from clause by their cardinality

SAS is sensitive to the ordering of tables in the from clause. For the fastest
response from the SAS/Share server, the table names in the from should
appear in descending order with respect to their cardinalities.

You can ensure that Data Federator generates tables in this order by keeping
the statistics in Data Federator accurate. You can do this using Data
Federator Administrator.

To control the order of tables manually, you can also set the sasWeights
resource property for the SAS JDBC connector.

Related Topics
• Managing statistics with Data Federator Administrator on page 394
• Managing resources and properties of connectors on page 483
• List of JDBC resource properties for SAS on page 428

List of JDBC resource properties for SAS

The table below lists the properties that you can configure in JDBC resources.

sasWeights: a mapping of table names to weights, used to order the tables in
the from clause when generating a query in the SAS dialect.

Tables in the from clause are ordered according to their weights, in
descending order. The weight is by default set to the table cardinality, but
it can be overridden using this parameter. This ordering is done only for
inner joins.

A table name here is the name as exported by the connector. A weight is a
long value.

Example

EMPLOYEE=16;DEPARTMENT=4

Using this setting, the EMPLOYEE table will appear before the DEPARTMENT
table when pushing a query on SAS with a join of these two tables.

If this parameter is not specified, or if no weight is defined for a given
table, then the weight is by default the cardinality of the table (as set in
Query Server).

If a table name is unknown, it is simply ignored.

This parameter is taken into account only when
the parameter sqlStringType is set to sas.
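If you prefer to set this property with SQL rather than in Data Federator
Administrator, the following is a minimal sketch that reuses the example
weights above. It assumes the pre-defined resource name jdbc.sas; adapt the
resource name to your installation.

ALTER RESOURCE "jdbc.sas" SET sasWeights 'EMPLOYEE=16;DEPARTMENT=4'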

Related Topics
• List of JDBC resource properties on page 461


Configuring SQL Server connectors

Configuring SQL Server connectors

In order to configure a connector for SQL Server, you must install JDBC
drivers. These drivers are usually available from the SQL Server website.
1. Download the JDBC driver for SQL Server.
You get a driver in the form of a .jar file or several .jar files.

Note:
The recommended driver for SQL Server 2000 is the SQL Server JDBC driver
SP3 (the version is 2.2.0040).

The recommended driver for SQL Server 2005 is version v1.0.809.102.

http://www.microsoft.com/downloads/details.aspx?familyid=07287b11-0502-461a-b138-2aa54bfdc03a&displaylang=en

2. Copy the driver .jar files to data-federator-install-dir/LeSelect/drivers.

This directory is the default directory where Data Federator looks for
JDBC drivers. If you want to put the drivers in a different directory, you
must enter this directory name in the corresponding resource.

When Data Federator starts, it loads your JDBC drivers, and it can access
the corresponding JDBC data source.

Related Topics
• Pointing a resource to an existing JDBC driver on page 460


Specific collation parameters for SQL Server

Three JDBC resource properties let you force a specific collation for SQL
Server, even if your source of data has a different default collation:

datasourceSortCollation: the source collation for sort operations
datasourceCompCollation: the source collation for comparisons
datasourceBinaryCollation: the source collation for binary comparisons

To force a collation value for SQL Server, change the value of the
datasourceSortCollation, datasourceCompCollation, or datasourceBinaryCollation
JDBC resource properties.

Example: Setting specific collation parameters for SQL Server

• datasourceBinaryCollation="Latin1_general_bin"
• datasourceSortCollation="french_ci_ai"
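As an illustration only, the statements below apply these values with the
ALTER RESOURCE syntax used elsewhere in this chapter. The resource name
jdbc.sqlserver is hypothetical; substitute the name of the SQL Server
resource used by your installation.

ALTER RESOURCE "jdbc.sqlserver" SET datasourceBinaryCollation 'Latin1_general_bin'
ALTER RESOURCE "jdbc.sqlserver" SET datasourceSortCollation 'french_ci_ai'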

Related Topics
• Collation in Data Federator on page 495
• How Data Federator decides how to push queries to sources when using
binary collation on page 500
• List of JDBC resource properties on page 461
• Managing resources using Data Federator Administrator on page 483


Configuring Sybase connectors

Supported versions of Sybase

This version of Data Federator supports Sybase Adaptive Server Enterprise
versions 12.5 and 15.0.

To let Data Federator connect to your Sybase Adaptive Server Enterprise
database, you must have:
• Data Federator Query Server and the Sybase Open Client library installed
on the same machine
• your library path set so that Data Federator can find the Sybase Open
Client library

Configuring Sybase connectors

In order to configure a connector for Sybase, you must do the following:
• Install middleware for Sybase. This middleware may be a driver, a client
application, or a combination of both.
• Configure the middleware to point to your database.
• Configure Data Federator to point to the middleware.

The middleware comes with Sybase and lets Data Federator talk to the
database. For details on installing it, see the Sybase documentation.

Once you install and configure the middleware, you can use Data Federator
to connect to Sybase data sources.


Installing middleware to let Data Federator connect to Sybase

To let Data Federator connect to your Sybase database, you must have the
following configuration.
• Data Federator Query Server and the Sybase Open Client library, 12.5
or 15.0, must be installed on the same machine
• the Sybase Open Client library must be included in the environment
variable that defines the library path

On Windows, this variable is called PATH.

Make sure that your PATH variable contains:
C:\sybase\Shared\Sybase Central 4.3;C:\sybase\ua\bin;C:\sybase\OCS-15_0\lib3p;C:\sybase\OCS-15_0\dll;C:\sybase\OCS-15_0\bin;
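As a minimal sketch, you could append these directories to the PATH of the
session that starts Data Federator Query Server with a command like the one
below. The directory names assume the default Sybase installation paths
listed above; adjust them to your installation.

set PATH=%PATH%;C:\sybase\Shared\Sybase Central 4.3;C:\sybase\ua\bin;C:\sybase\OCS-15_0\lib3p;C:\sybase\OCS-15_0\dll;C:\sybase\OCS-15_0\bin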

On Linux and Solaris, this variable is called LD_LIBRARY_PATH:

$ export SYBASE=/opt/sybase
$ export SYBASE_OCS=OCS-15_0
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${SYBASE}/${SYBASE_OCS}/lib:${SYBASE}/${SYBASE_OCS}/lib3p

On AIX, this variable is called LIB_PATH:

$ export SYBASE=/opt/sybase
$ export SYBASE_OCS=OCS-15_0
$ export LIB_PATH=$LIB_PATH:${SYBASE}/${SYBASE_OCS}/lib:${SYBASE}/${SYBASE_OCS}/lib3p

1. Make sure Sybase Open Client is configured to connect to your Sybase
server, where the Server name is defined as sybase-server-name.
For example, you can install the Open Client Directory Server Editor
(dsedit). Then, use dsedit to add a Server Object and choose a name
for this object, sybase-server-name.

For details on installing the Sybase middleware, see your vendor's
documentation: http://infocenter.sybase.com/help/index.jsp


2. In Data Federator Designer, when adding a datasource to your Sybase
database, use sybase-server-name as the value of the Server name
field.

Give the following information to users of Data Federator Designer who want
to connect to this Sybase server.

Server name: the name defined in the Server name field of the Server object
in Sybase Open Client: sybase-server-name

Default database: the name of the database running on the Sybase server:
sybase-database

User Name: the name of the user account used to connect to the Sybase
database: sybase-username

Password: the password for the user account used to connect to the Sybase
database: sybase-password

Related Topics
• http://infocenter.sybase.com/help/index.jsp

List of Sybase resource properties

The table below lists the properties that you can configure in Sybase
resources.

addCatalog (BOOLEAN): Set to True if you want to see the catalog as a prefix
for table names.

addSchema (BOOLEAN): Set to True if you want to see the schema as a prefix
for table names.

allowTableType (semi-colon-separated list of values): Lists the table types
to take into consideration when metadata is retrieved from the underlying
database. Special case: if this attribute is empty (' '), all table types are
allowed. Example: 'TABLE;SYSTEM TABLE;VIEW'

authenticationMode (one of configuredIdentity, callerImpersonation,
principalMapping):
configuredIdentity: authentication in the database is done using the value of
the parameters username and password.
callerImpersonation: authentication in the database is done using the same
credential as the one used to connect to the Query Server.
principalMapping: authentication in the database is done using a mapping from
the Data Federator user to the user of the database. In this case, the
parameter loginDomain should be set to a registered login domain.

capabilities (mapping): Defines what the data source supports in terms of
relational operators. It lists all capabilities supported by the database.
Depending on the supported relational operators, Data Federator manages the
queries differently. For example, if you specify outerjoin=false, that tells
Data Federator Query Server to execute this operator within the Data
Federator Query Server engine. An example is:
isjdbc=true;outerjoin=false;rightouterjoin=true. The Data Federator
documentation has a full list of capabilities.

database (STRING): Sybase only. The name of the default database.

defaultFetchSize: This parameter gives the driver a hint as to the number of
rows that should be fetched from the database when more rows are needed. If
the value specified is negative, the hint is ignored. The default value is -1.

ignoreKeys (BOOLEAN): Set to True if you do not want the connector to query
the data source to get keys and foreign keys metadata.

isPasswordEncrypted (BOOLEAN): Set to True if the password is encrypted. The
password is defined by the password parameter.

maxConnectionIdleTime (INTEGER): The maximum time an idle connection is kept
in the pool of connections. The unit is milliseconds. The default is 60000 ms
(60 s). 0 means no limit.

nbPreparedStatementsPerQuery (INTEGER): Maximum number of prepared statements
in the query pool.

networkLayer (pre-defined value): Specifies the network layer of the database
that you want to connect to. When you create a resource, choose the value
that corresponds to the database to which you are connecting. For Sybase,
this should be SybaseOpenClient.

password (STRING): Defines the password of the corresponding user. Note: this
property is a keyword, so you must enclose it in quotes when using the ALTER
RESOURCE statement, e.g. "password".

schema (semi-colon-separated list of values): Defines the schema names or
patterns that you access. Note: this property is a keyword, so you must
enclose it in quotes when using the ALTER RESOURCE statement, e.g. "schema".
You can specify several schemas. You can also specify wildcards for schemas.
Example: 'T%' = T followed by zero or more characters; 'S_' = S followed by
any single character.

setFetchSize (BOOLEAN): Defines if the connector should set the default fetch
size.

setQuotedIdentifier (BOOLEAN): Sybase only. Specifies if the quote character
(") is used around identifiers. If true, Data Federator puts quotes around
table and column identifiers when it sends queries to Sybase. If false, Data
Federator does not put quotes around identifiers, but this means that
identifiers with any complex characters will fail. The setting
setQuotedIdentifier=true corresponds to the statement set quoted_identifier=on
in Sybase.

sourceType (predefined value): Identifies the version of the database. For
Sybase, possible values are:
• Sybase Adaptive Server 12
• Sybase Adaptive Server 15

sqlStringType (predefined value): Defines the syntax used to generate the SQL
string. This parameter lets Data Federator Query Server translate the queries
expressed in the Data Federator SQL syntax to the syntax specific to the
database. According to the query language of the database, the possible list
of values is: sql92, sql99, jdbc3.
Example, jdbc3 format: SELECT * from {oj T1 LEFT OUTER JOIN T2 on T1.A1=T2.A2}
Example, SQL92 format: SELECT * from T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2

transactionIsolation (predefined value): Attempts to change the transaction
isolation level for connections to the database. The transactionIsolation
parameter is used by the connector to set the transaction isolation level of
each connection made to the underlying database. The Data Federator
documentation has more details about the transactionIsolation property.

user (STRING): Defines the username of the database account. Note: this
property is a keyword, so you must enclose it in quotes when using the ALTER
RESOURCE statement, e.g. "user". Example:
ALTER RESOURCE "jdbc.myresource" SET "user" 'newuser'

supportsboolean (True/Yes, False/No): Specifies if the middleware or database
supports BOOLEANS as first-class objects. The default value for this
parameter depends on the database. For pre-defined resources, this parameter
is already set to its correct value, but you can override it. Default: No.

maxRows (integer): Lets you define the maximum number of rows you want
returned from the database (default value 0: no limit).

allowPartialResults (boolean): Must be yes to allow partial results if
maxRows is set (yes/no, true/false; default value is no).

Related Topics
• transactionIsolation property on page 478


Configuring Sybase IQ connectors

Supported versions of Sybase IQ

This version of Data Federator supports Sybase Adaptive Server IQ versions
12.6 and 12.7.

To let Data Federator connect to your Sybase Adaptive Server IQ database,
you must install Sybase ODBC 9 for Adaptive Server IQ.

Configuring Sybase IQ connectors

In order to configure a connector for Sybase IQ, you must install an ODBC
driver and create an entry in your operating system's ODBC data source
administrator.
1. Install the ODBC driver for Sybase IQ.
2. Open your operating system's "ODBC Data Source Administrator".
To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start > Programs > Administrative Tools > Data
Sources (ODBC).

3. Create a DSN (Data Source Name entry) to point to your database.


Please refer to the vendor documentation for details on this configuration
step.

Provide the following information to users of Data Federator
Designer who want to create a datasource for Sybase IQ.

Data Source Name: the name that you defined in your operating system's data
source manager, in the field Data Source Name.

List of Sybase IQ resource properties

The table below lists the properties that you can configure in Sybase IQ
resources.


addCatalog (BOOLEAN): Set to True if you want to see the catalog as a prefix
for table names.

addSchema (BOOLEAN): Set to True if you want to see the schema as a prefix
for table names.

allowTableType (semi-colon-separated list of values): Lists the table types
to take into consideration when metadata is retrieved from the underlying
database. Special case: if this attribute is empty (' '), all table types are
allowed. Example: 'TABLE;SYSTEM TABLE;VIEW'

authenticationMode (one of configuredIdentity, callerImpersonation,
principalMapping):
configuredIdentity: authentication in the database is done using the value of
the parameters username and password.
callerImpersonation: authentication in the database is done using the same
credential as the one used to connect to the Query Server.
principalMapping: authentication in the database is done using a mapping from
the Data Federator user to the user of the database. In this case, the
parameter loginDomain should be set to a registered login domain.

capabilities (mapping): Defines what the data source supports in terms of
relational operators. It lists all capabilities supported by the database.
Depending on the supported relational operators, Data Federator manages the
queries differently. For example, if you specify outerjoin=false, that tells
Data Federator Query Server to execute this operator within the Data
Federator Query Server engine. An example is:
isjdbc=true;outerjoin=false;rightouterjoin=true. The Data Federator
documentation has a full list of capabilities.

database (STRING): Sybase only. The name of the default database.

defaultFetchSize: This parameter gives the driver a hint as to the number of
rows that should be fetched from the database when more rows are needed. If
the value specified is negative, the hint is ignored. The default value is -1.

ignoreKeys (BOOLEAN): Set to True if you do not want the connector to query
the data source to get keys and foreign keys metadata.

isPasswordEncrypted (BOOLEAN): Set to True if the password is encrypted. The
password is defined by the password parameter.

maxConnectionIdleTime (INTEGER): The maximum time an idle connection is kept
in the pool of connections. The unit is milliseconds. The default is 60000 ms
(60 s). 0 means no limit.

nbPreparedStatementsPerQuery (INTEGER): Maximum number of prepared statements
in the query pool.

networkLayer (pre-defined value): Specifies the network layer of the database
that you want to connect to. When you create a resource, choose the value
that corresponds to the database to which you are connecting. For Sybase IQ,
this should be ODBC.

password (STRING): Defines the password of the corresponding user. Note: this
property is a keyword, so you must enclose it in quotes when using the ALTER
RESOURCE statement, e.g. "password".

schema (semi-colon-separated list of values): Defines the schema names or
patterns that you access. Note: this property is a keyword, so you must
enclose it in quotes when using the ALTER RESOURCE statement, e.g. "schema".
You can specify several schemas. You can also specify wildcards for schemas.
Example: 'T%' = T followed by zero or more characters; 'S_' = S followed by
any single character.

setFetchSize (BOOLEAN): Defines if the connector should set the default fetch
size.

setQuotedIdentifier (BOOLEAN): Sybase only. Specifies if the quote character
(") is used around identifiers. If true, Data Federator puts quotes around
table and column identifiers when it sends queries to Sybase. If false, Data
Federator does not put quotes around identifiers, but this means that
identifiers with any complex characters will fail. The setting
setQuotedIdentifier=true corresponds to the statement set quoted_identifier=on
in Sybase.

sourceType (predefined value): Identifies the version of the database. For
Sybase IQ, the possible value is:
• Sybase ASIQ 12

sqlStringType (predefined value): Defines the syntax used to generate the SQL
string. This parameter lets Data Federator Query Server translate the queries
expressed in the Data Federator SQL syntax to the syntax specific to the
database. According to the query language of the database, the possible list
of values is: sql92, sql99, jdbc3.
Example, jdbc3 format: SELECT * from {oj T1 LEFT OUTER JOIN T2 on T1.A1=T2.A2}
Example, SQL92 format: SELECT * from T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2

transactionIsolation (predefined value): Attempts to change the transaction
isolation level for connections to the database. The transactionIsolation
parameter is used by the connector to set the transaction isolation level of
each connection made to the underlying database. The Data Federator
documentation has more details about the transactionIsolation property.

user (STRING): Defines the username of the database account. Note: this
property is a keyword, so you must enclose it in quotes when using the ALTER
RESOURCE statement, e.g. "user". Example:
ALTER RESOURCE "jdbc.myresource" SET "user" 'newuser'

supportsboolean (True/Yes, False/No): Specifies if the middleware or database
supports BOOLEANS as first-class objects. The default value for this
parameter depends on the database. For pre-defined resources, this parameter
is already set to its correct value, but you can override it. Default: No.

maxRows (integer): Lets you define the maximum number of rows you want
returned from the database (default value 0: no limit).

allowPartialResults (boolean): Must be yes to allow partial results if
maxRows is set (yes/no, true/false; default value is no).

Related Topics
• transactionIsolation property on page 478

Configuring Teradata connectors

Supported versions of Teradata

This version of Data Federator supports Teradata V2R5.1 or V2R6.

To let Data Federator connect to your Teradata database, you must install
a Teradata ODBC driver (versions 3.04 or 3.05).

Configuring Teradata connectors

In order to configure a connector for Teradata, you must install an ODBC


driver and create an entry in your operating system's ODBC data source
administrator.
1. Install the ODBC driver for Teradata.
2. Open your operating system's "ODBC Data Source Administrator".
To open the "ODBC Data Source Administrator" on a standard installation
of Windows, click Start > Programs > Administrative Tools > Data
Sources (ODBC).

3. Create a DSN (Data Source Name entry) to point to your database.


Please refer to the vendor documentation for details on this configuration
step.

Provide the following information to users of Data Federator
Designer who want to create a datasource for Teradata.


Data Source Name: the name that you defined in your operating system's data
source manager, in the field Data Source Name.

List of Teradata resource properties

The table below lists the properties that you can configure in Teradata
resources.

addCatalog (BOOLEAN): Set to True if you want to see the catalog as a prefix
for table names.

addSchema (BOOLEAN): Set to True if you want to see the schema as a prefix
for table names.

allowTableType (semi-colon-separated list of values): Lists the table types
to take into consideration when metadata is retrieved from the underlying
database. Special case: if this attribute is empty (' '), all table types are
allowed. Example: 'TABLE;SYSTEM TABLE;VIEW'

authenticationMode (one of configuredIdentity, callerImpersonation,
principalMapping):
configuredIdentity: authentication in the database is done using the value of
the parameters username and password.
callerImpersonation: authentication in the database is done using the same
credential as the one used to connect to the Query Server.
principalMapping: authentication in the database is done using a mapping from
the Data Federator user to the user of the database. In this case, the
parameter loginDomain should be set to a registered login domain.

capabilities (mapping): Defines what the data source supports in terms of
relational operators. It lists all capabilities supported by the database.
Depending on the supported relational operators, Data Federator manages the
queries differently. For example, if you specify outerjoin=false, that tells
Data Federator Query Server to execute this operator within the Data
Federator Query Server engine.
Example: 'isjdbc=true;outerjoin=false;rightouterjoin=true'
The Data Federator documentation has a full list of capabilities.

defaultFetchSize: This parameter gives the driver a hint as to the number of
rows that should be fetched from the database when more rows are needed. If
the value specified is negative, the hint is ignored. The default value is -1.

ignoreKeys (BOOLEAN): Set to True if you do not want the connector to query
the data source to get keys and foreign keys metadata.

isPasswordEncrypted (BOOLEAN): Set to True if the password is encrypted. The
password is defined by the password parameter.

maxConnectionIdleTime (INTEGER): The maximum time an idle connection is kept
in the pool of connections. The unit is milliseconds. The default is 60000 ms
(60 s). 0 means no limit.

nbPreparedStatementsPerQuery (INTEGER): Maximum number of prepared statements
in the query pool.

networkLayer (pre-defined value): Specifies the network layer of the database
that you want to connect to. When you create a resource, choose the value
that corresponds to the database to which you are connecting. For Teradata,
the value should be Teradata.

password (STRING): Defines the password of the corresponding user. Note: this
property is a keyword, so you must enclose it in quotes when using the ALTER
RESOURCE statement, e.g. "password".

schema (semi-colon-separated list of values): Defines the schema names or
patterns that you access. Note: this property is a keyword, so you must
enclose it in quotes when using the ALTER RESOURCE statement, e.g. "schema".
You can specify several schemas. You can also specify wildcards for schemas.
Example: 'T%' = T followed by zero or more characters; 'S_' = S followed by
any single character.

setFetchSize (BOOLEAN): Defines if the connector should set the default fetch
size.

sourceType (predefined value): Identifies the version of the database. For
Teradata, possible values are:
• Teradata V2 R5
• Teradata V2 R6

sqlStringType (predefined value): Defines the syntax used to generate the SQL
string. This parameter lets Data Federator Query Server translate the queries
expressed in the Data Federator SQL syntax to the syntax specific to the
database. According to the query language of the database, the possible list
of values is: sql92, sql99, jdbc3.
Example, jdbc3 format: SELECT * from {oj T1 LEFT OUTER JOIN T2 on T1.A1=T2.A2}
Example, SQL92 format: SELECT * from T1 LEFT OUTER JOIN T2 ON T1.A1=T2.A2

transactionIsolation (predefined value): Attempts to change the transaction
isolation level for connections to the database. The transactionIsolation
parameter is used by the connector to set the transaction isolation level of
each connection made to the underlying database. For details, see
transactionIsolation property on page 478.

user (STRING): Defines the username of the database account. Note: this
property is a keyword, so you must enclose it in quotes when using the ALTER
RESOURCE statement, e.g. "user". Example:
ALTER RESOURCE "jdbc.myresource" SET "user" 'newuser'

supportsboolean (True/Yes, False/No): Specifies if the middleware or database
supports BOOLEANS as first-class objects. The default value for this
parameter depends on the database. For pre-defined resources, this parameter
is already set to its correct value, but you can override it. Default: No.

maxRows (integer): Lets you define the maximum number of rows you want
returned from the database (default value 0: no limit).

sampleSize (integer): Lets you define the maximum number of rows to return in
a random sample from the database. Teradata only (default value 0: no limit).

allowPartialResults (boolean): Must be yes to allow partial results if
maxRows is set (yes/no, true/false; default value is no).
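As a hedged example, the statements below cap the number of rows returned
from a Teradata source. The resource name jdbc.teradata is hypothetical, and
the values are passed as quoted strings by analogy with the other ALTER
RESOURCE examples in this chapter; adapt both to your installation.

ALTER RESOURCE "jdbc.teradata" SET maxRows '10000'
ALTER RESOURCE "jdbc.teradata" SET allowPartialResults 'yes'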

Default values of capabilities in connectors

Capabilities information exported by connectors is as shown in the table
below:

Capabilities / Connectors   Teradata  Sybase ASE  Sybase ASIQ  Informix XPS  Netezza
outerJoin                   false     false       false        false         -
leftOuterJoin               -         false       false        false         -
rightOuterJoin              -         false       false        false         -
minAggregate                -         false       -            -             -
maxAggregate                -         false       -            -             -
avgAggregate                -         false       -            -             -
sumAggregate                -         false       -            -             -
union                       -         -           false        -             -
unionAll                    -         -           false        -             -
countAggregate              -         -           false        -             -
aggregateDistinct           -         -           false        -             -
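To override these defaults for a particular source, you can set the
capabilities property on the corresponding resource. The statement below is
illustrative only; the resource name jdbc.sybase is hypothetical, and the
capability string follows the isjdbc=true;outerjoin=false;rightouterjoin=true
form shown in the resource property tables above.

ALTER RESOURCE "jdbc.sybase" SET capabilities 'isjdbc=true;outerjoin=false;rightouterjoin=true'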

Configuring connectors that use JDBC


Connectors to JDBC sources are the most common type of connectors that
Data Federator uses.

By convention, the names of JDBC connectors start with jdbc.. If you create
a JDBC connector, you should maintain this convention.

In order to be usable from Data Federator Designer, JDBC connectors require


some specific properties. See the list of properties for JDBC resources to
learn which properties are required.

Related Topics
• Managing resources and properties of connectors on page 483
• List of JDBC resource properties on page 461

Pointing a resource to an existing JDBC driver

By default, Data Federator looks for JDBC drivers in the directory data-
federator-install-dir/LeSelect/drivers. You can keep the drivers in
a different directory by changing the path in the resource.

• In Data Federator Administrator, use the "Connector Resources" tab to
set the driverLocation property to the name of the directory where you
keep your drivers.
You can also use the ALTER RESOURCE statement to set the driverLocation
property.

For example, to set the directory for the oracle9 resource, use the following
statement.

ALTER RESOURCE "jdbc.oracle.oracle9" SET driverLocation


'C:\drivers\ojdbc14.jar'

Related Topics
• Managing resources using Data Federator Administrator on page 483
• Modifying a resource property using SQL on page 493
• List of JDBC resource properties on page 461

List of JDBC resource properties

The table below lists the properties that you can configure in JDBC resources.


Parameter Description
url
the url to the database

Example

jdbc:oracle:thin:@server.mydomain.com:1521:ora
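For instance, a minimal sketch of setting this property, reusing the ALTER
RESOURCE syntax and the jdbc.oracle.oracle9 resource name from the
driverLocation example earlier in this chapter; adapt the resource name and
URL to your environment.

ALTER RESOURCE "jdbc.oracle.oracle9" SET url 'jdbc:oracle:thin:@server.mydomain.com:1521:ora'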

jdbcClass
This property is required for JDBC resources to
work in Data Federator Designer.

the class name of the JDBC driver used to connect


to the database

Example

oracle.jdbc.driver.OracleDriver

driverLocation
This property is required for JDBC resources to
work in Data Federator Designer.

the location of the JDBC driver used to connect


to the database

The location is defined as a list of jar files and directories separated by
the system-dependent path separator (: on UNIX-like systems, ; on Windows).

Example

/usr/local/javaapps/oracle_classes12.zip
or C:\DRIVERS\oracle_classes12.zip

driverProperties
a list of driver properties

Elements are separated by the character ;. Do not


put spaces between the elements.
Example

selectMethod=cursor;connectionRetryCount=2

user
the username of the database account

Example

smith

password
the password of the corresponding user account

isPasswordEncrypted
specifies if the password has been encrypted by
Data Federator Designer

authenticationMode
one of: {configuredIdentity, callerImpersonation,
principalMapping}
1. configuredIdentity: Authentication on the
database is done using value of parameters
username and password.
2. callerImpersonation: Authentication on the
database is done using the same credential as
used to connect to Query Server.
3. principalMapping: Authentication on the database
is done using a mapping from the user of the
connector (principal) to a database user account.
In this case, the parameter loginDomain should
be set to a registered login domain.

loginDomain
the name of a login domain

Used only when authenticationMode=principalMapping. It identifies the set of
credentials to use when connecting to the underlying database. This login
domain should have been previously registered in Query Server (using the
stored procedure addLoginDomain).

supportsCatalog
specifies if the JDBC driver supports the notion
of catalog

The default is true.

escapeIdentifierQuoteString
defines the string used to escape the identifier quote string (as returned
by DatabaseMetaData#getIdentifierQuoteString) when it appears inside an
identifier

By default, this escape string is set to the identifier quote string itself.
If set to an empty value, no escape will be done.

addCatalog
specifies if Data Federator should prefix table
names with the name of the catalog

If supportsCatalog is false, this parameter is ignored.

Possible values are true, yes, false or no. The


default value is false.

supportsSchema
specifies if the JDBC driver supports the notion
of schema

The default value is true.

schema
the schema or schema pattern, or the list of
schema or schema patterns you want to access

Schemas and schema patterns can be separated with one of the following
characters: ',', ';' or ' ' (space). If supportsSchema is false, this
parameter is ignored. If empty or not present, it defaults to null
(equivalent to the pattern %).

Example

schema="SMITH", schema="SMITH;JOHN",
schema="SM%"

addSchema
specifies if Data Federator should prefix tables
with the name of the schema
If supportsSchema is false, this parameter is ignored.

Possible values are true, yes, false or no. The


default value is true if multiple schemas are
specified or the attribute schema is empty (or un-
defined); false otherwise.

showAllTables
specifies if Data Federator should show all tables
from the selected schemas or schema patterns

If showAllTables is false, only the tables that you selected when defining
the datasource are returned.

Possible values are true, yes, false or no. The


default value is true.

ignoreKeys
specifies if the wrapper should not query the JDBC
driver to get key or foreign key metadata

The Sun JDBC-ODBC bridge does not support


such calls, and this option should be set to true.

Possible values are true, yes, false or no. The


default value is false.

supportsBoolean
specifies if the JDBC driver or database does not
support booleans as first class objects

The default value for this parameter depends on


the database. If this is one of the supported source
types, this parameter is already set to its correct
value. However, it can be overridden.

Possible values are true, yes, false or no. The


default value is false.

trimTrailingSpaces
specifies if Data Federator should remove extra
spaces from catalog, schema, table, column, key
and foreign key names
Some JDBC drivers return metadata padded with blank spaces. Setting this
parameter to yes will ensure that extra spaces in catalog, schema, table,
column, key and foreign key names are removed.

Possible values are true, yes, false or no. The


default value is false.

collationName
Deprecated: use datasourceBinaryCollation in-
stead.

the source collation to use for binary operations

Example

collationName="latin1_bin"

datasourceBinaryCollation
the source collation to use for comparisons that
need to be evaluated with a binary collation (like,
not like and function evaluations)

This is used for SQL Server and MySQL to add a


collate clause in queries where the semantics of
binary collation are required. If unset, no collate
clause is generated for these operations.

Unset by default.

Example

datasourceBinaryCollation="Latin1_gener
al_bin"

datasourceCompCollation
the source collation to use for comparisons (other
than like, not like and function evaluations)
It is used for SQL Server and MySQL to add a
collate clause in queries. If unset, no collate clause
is generated for these operations.

Unset by default.

Example

datasourceCompCollation="Latin1_gener
al_ci_ai"

datasourceSortCollation
the source collation to use for sort operations (or-
der by)

This is used for SQL Server and MySQL to add a


collate clause in queries. If unset, no collate clause
is generated for these operations.

Unset by default.

Example

datasourceSortCollation="Latin1_gener
al_ci_as"

compCollationCompatible
specifies if the collation for comparison operations
in the data source is compatible with the current
setting in Query Server

When set to true, the server can ignore the collation of comparison
operations and predicates can be safely pushed on the source.

Possible values are true, yes, false or no. The


default value is false.

Example

compCollationCompatible="true"

sortCollationCompatible
specifies if the collation for sort operations (order
by) in the data source is compatible with the cur-
rent setting in the Query Server
When set to true, the server can ignore the collation of sort operations and
(order by) expressions can be safely pushed on the source.

Possible values are true, yes, false or no. The


default value is false.

Example

sortCollationCompatible="true"

sqlStringType
identifies the SQL dialect supported by the
database

one of:
• sql92
• sql99 (reserved for future usage)
• oracle8
• oracle9
• jdbc3 (JDBC syntax is used for outer joins)
• sas
Defaults to the SQL dialect supported by the
source as identified by the parameter sourceType.
If sourceType is undefined, then defaults to sql92.

sourceType
identifies the database

one of:
• oracle8R1
• oracle8R2
• oracle8R3
• oracle9R1
• oracle9R2
• sqlserver
• mysql
• db2
• access
• progress
• openedge
• sybase
• teradata
• sas

castColumnType
a list of mappings from the type used by the
database to the type used by JDBC

These should be in the form databasetype=jdbctype

This is useful when the default mapping done by


the driver is incorrect or incomplete.
Note:
For officially supported databases the type map-
pings are set implicitly, but you can override them.

Example

for Oracle JDBC driver FLOAT=FLOAT;BLOB=BLOB

allowTableType
a list of table types to take into consideration when
the metadata of the underlying database is re-
trieved
Elements are separated by the character ;. Do not
put spaces between the elements.

Special case: if this attribute is (empty), all table


types are allowed.

Example

TABLE;SYSTEM TABLE;VIEW

capabilities
a list of all capabilities supported by the database

Elements are separated by the character ;. Do not


put spaces between the elements.

Example

isjdbc=true;outerjoin=false;rightouterjoin=true

nbPreparedStatementsPerQuery
defines the maximum number of statements that
can be used concurrently when executing param-
eterized queries

useParameterInlining
specifies whether the JDBC connector should use java.sql.PreparedStatement
or java.sql.Statement objects to execute parameterized queries

When set to true, the JDBC connector uses java.sql.Statement objects to
execute parameterized queries. The parameterized query is inlined, replacing
placeholders with constant values. This option is useful for JDBC drivers
that do not support prepared statements well (for example Progress
OpenEdge). The default value is false.

transactionIsolation
the transaction isolation level

one of:
• TRANSACTION_READ_COMMITTED
• TRANSACTION_READ_UNCOMMITTED
• TRANSACTION_REPEATABLE_READ
• TRANSACTION_SERIALIZABLE
Default: not set.

defaultFetchSize
the default fetch size to set when creating state-
ment objects

Default: not set.

setFetchForwardDirection
specifies if fetch forward should be explicitly set

Possible values are true, yes, false or no. The


default value is false.

setReadOnly
specifies if connections should not be set to read only

Possible values are true, yes, false or no. The


default value is false.

sessionProperties
a list of session variables set on the database

Elements are separated by the character ;. Do not


put spaces between the elements.

Example

selectMethod=cursor;connectionRetryCount=2

useIndexInOrderBy
specifies if index (column position) should be used
instead of alias (column name) in the order by
clause of submitted queries

The default value is false (except for databases


which do not handle aliases in order by clause
well).

Example

If we order by column 2 and 3, we will generate


ORDER BY 2, 3 instead of ORDER BY C2, C3.

translationFile
the name of the XML file which contains transla-
tion definitions
The value can be absolute or relative to data-
federator-install-dir. If this parameter is
not specified, the default file will be used.

List of JDBC resource properties for connection pools

The table below lists the properties that you can configure in JDBC resources.

Parameter Description
maxIdlePools
the maximum number of pools that can be kept
idle
If this value is reached, the oldest unused pool is
closed and removed. 0 means no limit. The default
value is 24.

maxConnections
the maximum number of simultaneous connec-
tions to the underlying database

The value 0 means no limit. The default value is


0.

maxPoolSize
the maximum number of idle (free) connections
to keep in the pool

The value 0 means no limit. The default value is


32.

maxLoadPerConnection
the maximum load authorized for each connection

This value can be used to control the maximum


number of cursors open per connection. 0 means
no limit. The default value is 0.

maxConnectionIdleTime
the maximum time an idle connection is kept in
the pool of connections

Unit is milliseconds. 0 means no limit. The default


value is 60000 (60 seconds).

reaperCycleTime
Deprecated. There is now only one reaper for all
JDBC connector configurations. The system pa-
rameter leselect.core.jdbc.reaperCycleTime can
be used to control how often the connection reaper
should check for idle connections, or bad connec-
tions (those that are suspected to be broken due
to connection failure).

connectionTestQuery
the SQL test query that can be used to check if
connections to the underlying database are valid
Caution: this query should be cheap to execute.

Example

An example of a test query for Oracle could be


SELECT 1 FROM DUAL. An empty string means no
test query. The default value is empty.

connectionFailureDetectionOnError
a keyword indicating the kind of connection failure
detection that should be done when an SQLExcep
tion occurs
• sqlState: specifies that failure detection should
be done using SQLState codes

SQLState codes for connection failures have


a 2 character class 08. Some JDBC drivers do
not comply with this standard; in this case, the
parameter connectionFailureSQLStates can
be used to specify the list of all specific SQL
State codes returned by the driver.
• testQuery: specifies that failure detection
should be done using the test query defined
by parameter connectionTestQuery

The default value is sqlState.

connectionFailureSQLStates
the list of specific SQLState codes that can be used to detect a connection
failure when an SQLException is thrown by the underlying database

Standard codes for connection failures (starting


with the two character class 08) do not need to
be specified here. An example of specific code
for Oracle can be 61000(ORA-00028: your
session has been killed). Elements are sep-
arated by the character ;. Do not put spaces be-
tween the elements. The default value is empty.

Example

61000
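As an illustration, the statements below set a connection test query and a
connection cap on a resource. The jdbc.oracle.oracle9 resource name is reused
from the driverLocation example, and passing the numeric value as a quoted
string is an assumption based on the other ALTER RESOURCE examples in this
chapter.

ALTER RESOURCE "jdbc.oracle.oracle9" SET connectionTestQuery 'SELECT 1 FROM DUAL'
ALTER RESOURCE "jdbc.oracle.oracle9" SET maxConnections '20'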

Related Topics
• List of JDBC resource properties on page 461

List of common JDBC classes

The table below lists the most common JDBC driver classes. You can use these
when configuring the JDBC resource property named jdbcClass.

Database name    JDBC class

Access           sun.jdbc.odbc.JdbcOdbcDriver
DB2              com.ibm.db2.jcc.DB2Driver
MySQL            com.mysql.jdbc.Driver
Oracle           oracle.jdbc.driver.OracleDriver
Progress         com.ddtek.jdbc.sequelink.SequeLinkDriver
SAS              com.sas.net.sharenet.ShareNetDriver
SQLServer        com.microsoft.jdbc.sqlserver.SQLServerDriver
SQLServer 2005   com.microsoft.sqlserver.jdbc.SQLServerDriver
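For example, a minimal sketch of pointing a MySQL resource at its driver
class, assuming the pre-defined resource name jdbc.mysql mentioned later in
this chapter:

ALTER RESOURCE "jdbc.mysql" SET jdbcClass 'com.mysql.jdbc.Driver'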

Related Topics
• List of JDBC resource properties on page 461

List of pre-defined JDBC URL templates

The table below defines the default values used by the pre-defined databases.
You can use these when configuring the JDBC resource property named
urlTemplate.

Database name    URL template

Access           jdbc:odbc:<ODBC_DSN>
DB2              jdbc:db2://hostname[:port]/databasename
MySQL            jdbc:mysql://hostname[:port]/databasename
Oracle 8         jdbc:oracle:thin:@hostname[:port]:databasename
Oracle 9         jdbc:oracle:thin:@hostname[:port]:databasename
Oracle 10        jdbc:oracle:thin:@hostname[:port]:databasename
Progress         jdbc:sequelink://hostname[:port];serverDataSource=sequelinkdatasourcename
SAS              jdbc:sharenet://hostname:port
SQLServer 2000   jdbc:microsoft:sqlserver://hostname[:port];databasename=databasename
SQLServer 2005   jdbc:sqlserver://hostname[:port];databasename=databasename

Related Topics
• List of JDBC resource properties on page 461

transactionIsolation property

Attempts to change the transaction isolation level for connections to the
database. The transactionIsolation parameter is used by the JdbcWrapper
(JDBC connector) to set the transaction isolation level of each connection
made to the underlying database.

The exact meaning of each level is defined in the JDBC specification.

Predefined value

Expected values: {TRANSACTION_READ_COMMITTED |
TRANSACTION_READ_UNCOMMITTED |
TRANSACTION_REPEATABLE_READ | TRANSACTION_SERIALIZABLE}

Example: Improving performance using transactionIsolation
Use the transactionIsolation parameter to enhance the performance of the
connector if the database supports the setting of such levels. If you can
tolerate dirty reads, non-repeatable reads, and phantom reads, set the
transaction isolation to TRANSACTION_READ_UNCOMMITTED. You can
expect better performance.
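A minimal sketch of setting this level with SQL, reusing the
jdbc.oracle.oracle9 resource name from the driverLocation example; adapt the
resource name and the level to your environment.

ALTER RESOURCE "jdbc.oracle.oracle9" SET transactionIsolation 'TRANSACTION_READ_COMMITTED'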

urlTemplate

This property defines the template of the JDBC URL used to connect to the
database.

This value is a hint. Data Federator Designer shows the value of this property
while adding a datasource, and users of Data Federator Designer complete
the remaining values.

For example, if the value of urlTemplate is jdbc:hostname, users of Data
Federator Designer must replace hostname by the correct host.

String

Example: urlTemplate property

jdbc:oracle:thin:@server.mydomain.com:1521:mydatabase

Depending on the database, the URL follows a specific template. The
template is a character sequence with or without variables.

For the URL templates of common JDBC resources, see List of pre-defined
JDBC URL templates on page 476.

Configuring connectors to web services


You can configure the web service using the properties in the web service
resource. The web service resource is called ws.generic.


You can configure this resource as you would configure any resource in Data
Federator Administrator.

Note:
Any changes that you make to the resource ws.generic will apply to all web
services that are deployed on your installation of Data Federator Query
Server.

Related Topics
• Creating and configuring a resource using Data Federator Administrator
on page 486
• List of resource properties for web service connectors on page 480

List of resource properties for web service connectors

The table below lists the properties that you can configure in web service
resources.

addNamespaceInParameter (default value: true)

By default, Data Federator decides if it should add namespaces before
parameters in SOAP requests.

You can use the parameter addNamespaceInParameter to deactivate the default
behavior and force Data Federator to suppress namespaces.

For example, if you have the following SOAP body with the single parameter
Symbol:

<soapenv:Body>
<ns:GetQuote xmlns:ns="http://www.xignite.com/services/">
<ns:Symbol>SAP</ns:Symbol>
</ns:GetQuote>
</soapenv:Body>

Then, if addNamespaceInParameter is true, Data Federator detects if the web
service expects a namespace. If so, it generates the line containing the
parameter as:

<ns:Symbol>SAP</ns:Symbol>

If addNamespaceInParameter is false, Data Federator will generate the line
containing the parameter as:

<Symbol>SAP</Symbol>
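For example, to force Data Federator to suppress namespaces for all deployed
web services, you could set this property on the ws.generic resource. This is
a minimal sketch that assumes the boolean is passed as a quoted string, like
the other ALTER RESOURCE examples in this chapter.

ALTER RESOURCE "ws.generic" SET addNamespaceInParameter 'false'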

Managing resources and properties of connectors

Resources are used for configuring connectors in a flexible way. A resource
is a set of properties with a name. The name usually corresponds to a
connector for which the resource contains properties, like jdbc.sas or
jdbc.mysql.

You can set the properties of connectors by using one of the pre-defined
resources. Resources let you re-use the same set of properties for different
connectors.

You can also make your own resources. When you make a resource, you
define properties, the values of the properties, and then you choose a name
for your set of properties. See the documentation on managing resources
and properties for details.

Note:
In order to use a resource to make a connection, you must install drivers for
your sources of data.

Managing resources using Data Federator Administrator

Data Federator Administrator provides a window where you manage
resources.


To access the Data Federator Administrator interface for managing resources,
login to Data Federator Administrator, click the Administration tab, then
click Connector Settings.

You must be logged in with an administrator user account to change accounts
of other users. If you log in as a regular user, you can only change your own
information.

The following table summarizes how to manage resources on the Connector
Resources tab.

Table 12-8: Connector resources tab summary

Task: Actions

Create a resource: At the right of the Resource pull-down list, click the New icon.

Delete a resource

Copy a resource: In the Resource pull-down list, select the resource, and click the Copy icon.

Add a property to a resource: Click Add a property.

Delete a property

Impact of modifications to a resource on a configured connector


When properties of a resource are modified, the changes are made visible
the next time the datasource is accessed. This allows you to update an
existing connector configuration dynamically and in a flexible way.

The values that users define when adding a datasource in Data Federator
Designer override the values defined in the resource.


Related Topics
• Data Federator Administrator overview on page 384
• Creating and configuring a resource using Data Federator Administrator
on page 486
• Copying a resource using Data Federator Administrator on page 488

Valid names for resources

Your resource name must begin with a letter [a-zA-Z]. The characters that
follow can be any number of alphanumeric [a-zA-Z0-9], dot . and
underscore _, in any order, but each dot must be immediately followed by
an alphanumeric or underscore.

A resource name must start with a prefix that identifies the type of the
resource.

Available prefixes are:


• jdbc
• odbc
• openclient
• ws

Example: Valid names for resources


• jdbc.My.Resource1
• jdbc.My_Resource1_
• odbc.My_Resource___for___My_Database
• odbcMy.Resource.for.My.Database
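
One way to picture these rules is as a small validity check. The regular expression below is an interpretation of the naming rules stated above, written for illustration only; it is not shipped with or used by the product.

import java.util.regex.Pattern;

// Interprets the naming rules above: starts with a letter, continues with
// alphanumerics, underscores and dots, where each dot is immediately followed
// by an alphanumeric or underscore, and the name starts with a known prefix.
public class ResourceNameCheck {
    private static final Pattern NAME =
            Pattern.compile("[A-Za-z](?:[A-Za-z0-9_]|\\.(?=[A-Za-z0-9_]))*");
    private static final String[] PREFIXES = {"jdbc", "odbc", "openclient", "ws"};

    static boolean isValid(String name) {
        if (!NAME.matcher(name).matches()) {
            return false;
        }
        for (String prefix : PREFIXES) {
            if (name.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isValid("jdbc.My.Resource1"));               // true
        System.out.println(isValid("odbcMy.Resource.for.My.Database")); // true
        System.out.println(isValid("jdbc..broken"));   // false: dot not followed by alphanumeric or underscore
        System.out.println(isValid("1jdbc.resource")); // false: does not begin with a letter
    }
}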

Creating and configuring a resource using Data Federator Administrator

You can create a new, empty resource, and then add the properties that you require.

1. Log in to Data Federator Administrator.
The Data Federator Administrator screen is displayed.
2. At the top of the screen, click the Administration tab.
The Administration panel appears. At the left of the panel, the list of
administration options appears.
3. In the list of administration options, click the Connector Settings option.
The "Resource" panel appears.
4. At the right of the Resource pull-down list, click the New icon.
A dialog box appears, prompting you to enter a name for your new
resource.
5. Enter a name for your resource and click OK.
The dialog box closes.
6. Below the Property Name heading, click Add a property.
7. From the Property Name pull-down list, select a property to add, and in
the Property Value field, enter a value for the property. Click OK to add
the property.
Data Federator Administrator validates your entry and displays an error
message if it is invalid for the property.
8. Repeat the process to add the properties that you want.
In order to be usable from Data Federator Designer, JDBC connectors
require some specific properties. See the list of properties for JDBC
resources to learn which properties are required.

When you finish, the new resource that you created is available to use in
your Data Federator Designer projects.

Related Topics
• Managing resources using Data Federator Administrator on page 483
• Valid names for resources on page 486
• List of JDBC resource properties on page 461


Copying a resource using Data Federator Administrator

You can create a new resource by copying an existing resource, and then adding and modifying the properties that you require.
1. Log in to Data Federator Administrator.
The Data Federator Administrator screen is displayed.
2. At the top of the screen, click the Administration tab.
The Administration panel is displayed. At the left of the panel, the list
of administration options is displayed.
3. In the list of administration options, click the Connector Settings option.
The "Resource" panel is displayed.
4. In the Resource pull-down list, select the resource to copy, and click the Copy icon.
A dialog box appears, prompting you to enter a name for the new copy.
5. Enter a name for the resource copy and click OK.
The properties configured in the copied resource are displayed.
6. Add, edit, and delete properties as required to configure the new resource.
When you finish editing the properties, click OK.
7. Repeat the process to add and modify the properties that you want.
In order to be usable from Data Federator Designer, JDBC connectors
require some specific properties. See the list of properties for JDBC
resources to learn which properties are required.

When you finish, the new resource that you created is available to use in
your Data Federator Designer projects.

Related Topics
• Managing resources using Data Federator Administrator on page 483
• Valid names for resources on page 486
• List of JDBC resource properties on page 461

List of pre-defined resources

Data Federator delivers a set of pre-defined resources for the most popular databases. This table lists the names of those that are available with the installation.

Table 12-9: Pre-defined resources

Name of source         Type of driver or middleware           Name of resource
Access                 JDBC                                   jdbc.access
DB2                    JDBC                                   jdbc.db2
DB2 iSeries (as400)    JDBC                                   jdbc.db2.iSeries
DB2 zSeries (os390)    JDBC                                   jdbc.db2.zSeries
IBM Informix XPS       ODBC                                   odbc.informix.informixXPS85
                                                              odbc.informix.informixXPS84
MySQL                  JDBC                                   jdbc.mysql
Netezza                ODBC                                   odbc.netezza
Oracle 8               JDBC                                   jdbc.oracle.oracle8
Oracle 9               JDBC                                   jdbc.oracle.oracle9
Oracle 10              JDBC                                   jdbc.oracle.oracle10
Progress OpenEdge      Progress through JDBC to ODBC bridge   jdbc.progress.openedge
JDBC through