
DataDirect Connect Series for ADO.NET Reference Release 4.0.

© 2012 Progress Software Corporation and/or its subsidiaries or affiliates. All rights reserved. These materials and all Progress software products are copyrighted and all rights are reserved by Progress Software Corporation. The information in these materials is subject to change without notice, and Progress Software Corporation assumes no responsibility for any errors that may appear therein. The references in these materials to specific platforms supported are subject to change. Actional, Apama, Artix, Business Empowerment, Business Making Progress, Corticon, Corticon (and design), DataDirect (and design), DataDirect Connect, DataDirect Connect64, DataDirect Technologies, DataDirect XML Converters, DataDirect XQuery, DataXtend, Dynamic Routing Architecture, Empowerment Center, Fathom, Fuse Mediation Router, Fuse Message Broker, Fuse Services Framework, IONA, Making Software Work Together, Mindreef, ObjectStore, OpenEdge, Orbix, PeerDirect, Powered by Progress, PowerTier, Progress, Progress DataXtend, Progress Dynamics, Progress Business Empowerment, Progress Empowerment Center, Progress Empowerment Program, Progress OpenEdge, Progress Profiles, Progress Results, Progress Software Business Making Progress, Progress Software Developers Network, Progress Sonic, ProVision, PS Select, RulesCloud, RulesWorld, Savvion, SequeLink, Shadow, SOAPscope, SOAPStation, Sonic, Sonic ESB, SonicMQ, Sonic Orchestration Server, SpeedScript, Stylus Studio, Technical Empowerment, WebSpeed, Xcalia (and design), and Your Software, Our Technology-Experience the Connection are registered trademarks of Progress Software Corporation or one of its affiliates or subsidiaries in the U.S. and/or other countries.
AccelEvent, Apama Dashboard Studio, Apama Event Manager, Apama Event Modeler, Apama Event Store, Apama Risk Firewall, AppsAlive, AppServer, ASPen, ASP-in-a-Box, BusinessEdge, Cache-Forward, CloudEdge, DataDirect Spy, DataDirect SupportLink, Fuse, FuseSource, Future Proof, GVAC, High Performance Integration, ObjectStore Inspector, ObjectStore Performance Expert, OpenAccess, Orbacus, Pantero, POSSE, ProDataSet, Progress Arcade, Progress CloudEdge, Progress Cloudware, Progress Control Tower, Progress ESP Event Manager, Progress ESP Event Modeler, Progress Event Engine, Progress RFID, Progress RPM, Progress Responsive Cloud, Progress Responsive Process Management, Progress Software, PSE Pro, SectorAlliance, SeeThinkAct, Shadow z/Services, Shadow z/Direct, Shadow z/Events, Shadow z/Presentation, Shadow Studio, SmartBrowser, SmartComponent, SmartDataBrowser, SmartDataObjects, SmartDataView, SmartDialog, SmartFolder, SmartFrame, SmartObjects, SmartPanel, SmartQuery, SmartViewer, SmartWindow, Sonic Business Integration Suite, Sonic Process Manager, Sonic Collaboration Server, Sonic Continuous Availability Architecture, Sonic Database Service, Sonic Workbench, Sonic XML Server, The Brains Behind BAM, WebClient, and Who Makes Progress are trademarks or service marks of Progress Software Corporation and/or its subsidiaries or affiliates in the U.S. and other countries. Java is a registered trademark of Oracle and/or its affiliates. Any other marks contained herein may be trademarks of their respective owners.

Table of Contents

Preface .......... 7
    Using This Book .......... 7
    Typographical Conventions .......... 8
    About the Product Documentation .......... 9
        HTML Version .......... 9
        Compiled Help File .......... 10
        PDF Version .......... 10
    Contacting Customer Support .......... 11

1  Using SQL Escape Sequences in .NET Applications .......... 13
    Date, Time, and Timestamp Escapes .......... 13
    Scalar Function Escape .......... 14
    Stored Procedure Escape .......... 18
    Outer Join Escape Sequence .......... 18
    SQL Extension Escape .......... 19

2  Locking and Isolation Levels .......... 21
    Locking .......... 21
    Locking Modes and Levels .......... 21
    Isolation Levels .......... 22

3  Using Your Data Provider with the ADO.NET Entity Framework .......... 25
    Using the Database First Model .......... 26
    Using Model First .......... 31
    Using the Code First Model .......... 38
    Using Stored Procedures to Provide Functionality .......... 39
    Extending Entity Framework Functionality .......... 40
        Configuration Options .......... 40
        Logging Application Block Support .......... 41
    Enhancing ADO.NET Entity Framework Performance .......... 42
        Obtaining Connection Statistics .......... 42
        Implementing Reauthentication .......... 43
        Model First and Code First Support .......... 44
        Using the Performance Tuning Wizard .......... 44
    Mapping EDM Canonical Functions to Data Source Functions .......... 44
    Implementation Differences for the ADO.NET Entity Framework Provider .......... 45
    For More Information .......... 45

4  Using the Microsoft Enterprise Library .......... 47
    Data Access Application Block Overview .......... 47
        When Should You Use DAABs? .......... 47
        Should You Use Generic or Database-specific Classes? .......... 48
        Configuring the DAAB .......... 48
        Using the Data Access Application Block in Application Code .......... 53
        Using the DAAB Classes with Enterprise Library Version 4.1 .......... 53
    Logging Application Blocks .......... 54
        When Should You Use the LAB? .......... 54
        Configuring the LAB .......... 54
        Using the LAB in Application Code .......... 58
        Using Different Versions of the Logging Application Block .......... 60
    For More Information .......... 60

5  Getting Schema Information .......... 61
    Columns Returned by the GetSchemaTable Method .......... 61
    Retrieving Schema Metadata with the GetSchema Method .......... 63
        MetaDataCollections Schema Collections .......... 63
        DataSourceInformation Schema Collection .......... 64
        DataTypes Collection .......... 65
        ReservedWords Collection .......... 67
        Restrictions Collection .......... 67
    Additional Schema Metadata Collections .......... 68
        Catalogs Schema Collection .......... 68
        Columns Schema Collection .......... 69
        ForeignKeys Schema Collection .......... 71
        Indexes Schema Collection .......... 73
        PrimaryKeys Schema Collection .......... 76
        ProcedureParameters Schema Collection .......... 77
        Procedures Schema Collection .......... 79
        Schemata Schema Collection .......... 80
        Tables Schema Collection .......... 81
        TablePrivileges Schema Collection .......... 82
        Views Schema Collection .......... 83

6  Client Information for Connections .......... 85
    How Databases Store Client Information .......... 85
    Storing Client Information .......... 86
    DB2 Workload Manager (WLM) Attributes .......... 87
        DB2 V9.5 for Linux/UNIX/Windows .......... 87
        DB2 for z/OS .......... 88

7  Designing .NET Applications for Performance Optimization .......... 89
    Simplifying Automatically-generated SQL Queries .......... 90
        Reviewing SQL Queries Created by Visual Studio Wizards .......... 90
        Avoiding the CommandBuilder Object .......... 91
    Designing .NET Applications .......... 91
        Using Connection Pooling .......... 91
        Opening and Closing Connections .......... 92
        Implementing Reauthentication .......... 93
        Managing Commits in Transactions .......... 93
        Choosing the Right Transaction Model .......... 94
        Using Commands Multiple Times .......... 94
        Using Statement Caching .......... 94
        Using Parameter Markers as Arguments to Stored Procedures .......... 95
        Choosing Between a DataSet and a DataReader .......... 96
        Using Native Managed Providers .......... 96
    Retrieving Data .......... 97
        Retrieving Long Data .......... 97
        Reducing the Size of Data Retrieved .......... 98
        Using Commands that Retrieve Little or No Data .......... 98
        Choosing the Right Data Type .......... 99
    Updating Data .......... 100
        Synchronizing Changes Back to the Data Source .......... 100

8  Using ClickOnce Deployment .......... 101
    Deploying the Data Provider with Your Application .......... 101

A  Using an .edmx File .......... 105

B  Using Enterprise Library 4.1 .......... 111
    Data Access Application Block Overview .......... 111
        Configuring the DAAB .......... 111
        Using the Data Access Application Block in Application Code .......... 114
    Logging Application Blocks .......... 115
        When Should You Use the LAB? .......... 115
        Configuring the LAB .......... 115
        Adding a New Logging Application Block Entry .......... 118
        Using the LAB in Application Code .......... 119

Glossary .......... 121
Index .......... 127


Preface
This book provides reference information for using Progress DataDirect Connect Series for ADO.NET.

Using This Book


This book assumes that you are familiar with your operating system and its commands, the definition of directories, and accessing a database through an end-user application. This book contains the following information:

- Chapter 1, Using SQL Escape Sequences in .NET Applications, on page 13 describes the scalar functions that are supported by the DataDirect Connect for ADO.NET data providers. Your data store may not support all of these functions.
- Chapter 2, Locking and Isolation Levels, on page 21 discusses locking and isolation levels and how their settings can affect the data you retrieve.
- Chapter 3, Using Your Data Provider with the ADO.NET Entity Framework, on page 25 describes how to create a model for the DataDirect ADO.NET Entity Framework data providers.
- Chapter 4, Using the Microsoft Enterprise Library, on page 47 describes how to configure the Data Access Application Block and Logging Application Block, and how to use them in your application code.
- Chapter 5, Getting Schema Information, on page 61 describes the columns that are returned by the GetSchemaTable method and how to retrieve schema metadata with the GetSchema method.
- Chapter 6, Client Information for Connections, on page 85 describes how you can use extensions provided by Progress DataDirect to store and return client information for a connection.
- Chapter 7, Designing .NET Applications for Performance Optimization, on page 89 provides recommendations for improving the performance of your applications by optimizing their code.
- Chapter 8, Using ClickOnce Deployment, on page 101 describes how you can deploy your Windows Forms application and a DataDirect Connect for ADO.NET data provider from a Web server.
- Appendix A, Using an .edmx File, on page 105 explains the changes that must be made to an .edmx file in order to provide Extended Entity Framework functionality to the Entity Data Model (EDM) layer.
- Appendix B, Using Enterprise Library 4.1, on page 111 provides configuration information for using the data providers with Microsoft Enterprise Library Version 4.1.

In addition, a "Glossary" on page 121 helps you with terminology referenced in this book.

NOTE: This book refers the reader to Web pages using URLs for more information about specific topics, including Web pages not maintained by Progress DataDirect. Because it is the nature of Web content to change frequently, Progress DataDirect can guarantee only that the URLs referenced in this book were correct at the time of publishing.

Typographical Conventions
This book uses the following typographical conventions:

italics
    Introduces new terms with which you may not be familiar, and is used occasionally for emphasis.

bold
    Emphasizes important information. Also indicates button, menu, and icon names on which you can act. For example, click Next.

UPPERCASE
    Indicates keys or key combinations that you can use. For example, press the ENTER key. Also used for SQL reserved words.

monospace
    Indicates syntax examples, values that you specify, or results that you receive.

monospaced italics
    Indicates names that are placeholders for values that you specify. For example, filename.

forward slash (/)
    Separates menus and their associated commands. For example, Select File / Copy means that you should select Copy from the File menu. The slash also separates directory levels when specifying locations under UNIX.

vertical rule (|)
    Indicates an "OR" separator used to delineate items.

brackets ([ ])
    Indicates optional items. For example, in the following statement: SELECT [DISTINCT], DISTINCT is an optional keyword. Also indicates sections of the Windows Registry.

braces ({ })
    Indicates that you must select one item. For example, {yes | no} means that you must specify either yes or no.

ellipsis (. . .)
    Indicates that the immediately preceding item can be repeated any number of times in succession. An ellipsis following a closing bracket indicates that all information in that unit can be repeated.


About the Product Documentation


The product library consists of the following books:

- DataDirect Connect Series for ADO.NET Installation Guide details requirements and procedures for installing DataDirect Connect for ADO.NET.
- DataDirect Connect Series for ADO.NET Users Guide provides information about configuring and using the product.
- DataDirect Connect Series for ADO.NET Reference provides detailed reference information about the product.
- DataDirect Connect Series for ADO.NET Troubleshooting Guide provides information about error messages and troubleshooting procedures for the product.

HTML Version
The product library, except for the installation guide, is placed on your system as HTML-based online help during a normal installation of the product. The help system is located in the help subdirectory of the product installation directory. To use the help, you must have one of the following Internet browsers installed:

- Internet Explorer 5.x, 6.x, 7.x, 8.x, and 9.x
- Mozilla Firefox 1.x, 2.x, 3.x, and 8.0
- Netscape 4.x, 7.x, and 8.x
- Safari 1.x, 2.x, 3.x, and 5.1.2
- Opera 7.54u2, 8.x, and 9.x

On Windows, you can access the entire help system by selecting the help icon that appears in the DataDirect Connect for ADO.NET program group.

On all platforms, you can access the entire help system by opening the following file from within your browser:

install_dir/dotnethelp/help.htm

where install_dir is the path to the product installation directory.

After the browser opens, the left pane displays the Table of Contents, Index, and Search tabs for the entire documentation library. When you have opened the main screen of the help system in your browser, you can bookmark it in the browser for quick access later.

NOTE: Security features set in your browser can prevent the help system from launching; in that case, a security warning message is displayed. Often, the warning message provides instructions for unblocking the help system for the current session. To allow the help system to launch without encountering a security warning message, you can modify the security settings in your browser. Check with your system administrator before disabling any security features.


Compiled Help File


A compiled help file (.CHM) is placed on your system during a normal installation of the product. It is located in the help subdirectory of the product installation directory. The product program group contains an icon for launching the help system.

To access help from a command-line environment, enter the following at a command prompt:

install_dir/help/DataDirect_Connect_for_ADONET_Help.chm

where install_dir is the path to your product installation directory.

PDF Version
The product documentation is also provided in PDF format. You can view or print the documentation, and perform text searches in the files. The PDF documentation is available on the Progress DataDirect Web site at:

http://www.datadirect.com/support/product-info/documentation/by-product.html

You can download the entire library in a compressed file. When you uncompress the file, it appears in the correct directory structure. Maintaining the correct directory structure allows cross-book text searches and cross-references. If you download or copy the books individually outside of their normal directory structure, their cross-book search indexes and hyperlinked cross-references to other volumes will not work. You can view a book individually, but it will not automatically open other books to which it has cross-references.

To help you navigate through the library, a file called books.pdf is provided. This file lists each online book provided for the product. We recommend that you open this file first and, from this file, open the book you want to view.

NOTE: To use the cross-book search feature, you must use Adobe Reader 8.0 or higher. If you are using a version of Adobe Reader earlier than 8.0, or one that does not support the cross-book search feature, you can still view the books and use the Find feature within a single book.


Contacting Customer Support


Progress DataDirect offers a variety of options to meet your customer support needs. Please visit our Web site for more details and for contact information:

http://www.datadirect.com/support/index.html

The Progress DataDirect Web site provides the latest support information through our global service network. The SupportLink program provides access to support contact details, tools, patches, and valuable information, including a list of FAQs for each product. In addition, you can search our Knowledgebase for technical bulletins and other information.

When you contact us for assistance, please provide the following information:

- Your customer number or the serial number that corresponds to the product for which you are seeking support, or a case number if you have been provided one for your issue. If you do not have a SupportLink contract, the SupportLink representative assisting you will connect you with our Sales team.
- Your name, phone number, email address, and organization. For a first-time call, you may be asked for full customer information, including location.
- The Progress DataDirect product and the version that you are using.
- The type and version of the operating system where you have installed your product.
- Any database, database version, third-party software, or other environment information required to understand the problem.
- A brief description of the problem, including, but not limited to, any error messages you have received, what steps you followed prior to the initial occurrence of the problem, any trace logs capturing the issue, and so on. Depending on the complexity of the problem, you may be asked to submit an example or reproducible application so that the issue can be re-created.
- A description of what you have attempted to resolve the issue. If you have researched your issue on Web search engines, our Knowledgebase, or have tested additional configurations, applications, or other vendor products, you will want to carefully note everything you have already attempted.
- A simple assessment of how the severity of the issue is impacting your organization.

June 2012, Release 4.0.0 of DataDirect Connect for ADO.NET, version 0000


Using SQL Escape Sequences in .NET Applications


A number of language features, such as outer joins and scalar function calls, are commonly implemented by DBMSs. The syntax for these features is often DBMS-specific, even when a standard syntax has been defined. The DataDirect Connect for ADO.NET data providers support escape sequences that contain standard syntaxes for the following language features:

- Date, time, and timestamp literals
- Scalar functions, such as numeric, string, and data type conversion functions
- Stored procedures
- Outer joins
- SQL extension

The data providers recognize and parse escape sequences, replacing them with data store-specific grammar.

Date, Time, and Timestamp Escapes


The escape sequence for date, time, and timestamp literals is:

{literal-type 'value'}

where literal-type is one of the following:

literal-type   Description   Value Format
d              Date          yyyy-mm-dd
t              Time          hh:mm:ss
ts             Timestamp     yyyy-mm-dd hh:mm:ss[.f...]

Example:

UPDATE Orders SET OpenDate={d '1997-01-29'} WHERE OrderID=1023
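All three literal types can appear anywhere the data store expects the corresponding literal. The following statements sketch each form against hypothetical Orders columns (the table and column names are illustrative, not part of the product):

```sql
-- Date escape: a portable DATE literal
UPDATE Orders SET ShipDate = {d '1997-02-14'} WHERE OrderID = 1023

-- Time escape: a portable TIME literal
UPDATE Orders SET ShipTime = {t '16:30:00'} WHERE OrderID = 1023

-- Timestamp escape, with the optional fractional seconds
INSERT INTO Orders (OrderID, Created)
VALUES (1024, {ts '1997-02-14 16:30:00.123'})
```

Because the data provider rewrites the escape into the data store's native literal syntax, the same statement text works unchanged across the supported databases.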


Scalar Function Escape


You can use scalar functions in SQL statements with the following syntax:

{fn scalar-function}

where scalar-function is a scalar function supported by the DataDirect Connect for ADO.NET data providers, as listed in Table 1-1.

Example:

SELECT {fn UCASE(name)} FROM emp

Table 1-1. Scalar Functions Supported

DB2 for iSeries
    String functions: CHAR_LENGTH, CHARACTER_LENGTH, CONCAT, DIFFERENCE, LCASE, LEFT, LENGTH, LOCATE, LTRIM, POSITION, RTRIM, SOUNDEX, SPACE, SUBSTRING, UCASE
    Numeric functions: ABS or ABSVAL, ACOS, ASIN, ATAN, ATAN2, BIGINT, CEILING or CEIL, COS, COT, DECIMAL, DEGREES, DIGITS, DOUBLE, EXP, FLOAT, FLOOR, INTEGER, LN, LOG, LOG10, MOD, POWER, RADIANS, RAND, REAL, ROUND, SIGN, SIN, SMALLINT, SQRT, TAN, TRUNCATE
    Timedate functions: CURDATE, CURTIME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, DAYNAME, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, HOUR, MINUTE, MONTH, MONTHNAME, NOW, QUARTER, SECOND, WEEK, YEAR
    System functions: DATABASE, NULLIF, USER
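Scalar function escapes can be combined and nested within a statement; the data provider rewrites each {fn ...} into the data store's native function call. A sketch against a hypothetical emp table (the table and column names are illustrative):

```sql
-- Nested scalar escapes: trim trailing blanks, then uppercase
SELECT {fn UCASE({fn RTRIM(ename)})} FROM emp

-- Numeric and timedate escapes in the same statement
SELECT {fn ROUND(sal, 0)}, {fn DAYNAME(hiredate)} FROM emp
```

The functions shown (UCASE, RTRIM, ROUND, DAYNAME) appear in Table 1-1 for every listed data store, but remember that your data store may not support every function in the table.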


Table 1-1. Scalar Functions Supported (cont.)

DB2 for z/OS
    String functions: CHAR_LENGTH, CHARACTER_LENGTH, CONCAT, INSERT, LCASE, LEFT, LENGTH, LOCATE, LTRIM, POSITION, REPEAT, REPLACE, RIGHT, RTRIM, SPACE, SUBSTRING, UCASE
    Numeric functions: ABS or ABSVAL, ACOS, ASIN, ATAN, ATAN2, BIGINT, CEILING or CEIL, COS, COT, DECIMAL, DEGREES, DIGITS, DOUBLE, EXP, FLOAT, FLOOR, INTEGER, LN, LOG, LOG10, MOD, POWER, RADIANS, RAND, REAL, ROUND, SIGN, SIN, SMALLINT, SQRT, TAN, TRUNCATE
    Timedate functions: CURDATE, CURTIME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, DAYNAME, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, HOUR, MINUTE, MONTH, MONTHNAME, NOW, QUARTER, SECOND, WEEK, YEAR
    System functions: DATABASE, NULLIF, USER


Table 1-1. Scalar Functions Supported (cont.)

DB2 for Linux/UNIX/Windows
    String functions: ASCII, CHAR, CHAR_LENGTH, CHARACTER_LENGTH, CONCAT, DIFFERENCE, INSERT, LCASE, LEFT, LENGTH, LOCATE, LTRIM, POSITION, REPEAT, REPLACE, RIGHT, RTRIM, SOUNDEX, SPACE, SUBSTRING, UCASE
    Numeric functions: ABS or ABSVAL, ACOS, ASIN, ATAN, ATAN2, BIGINT, CEILING or CEIL, COS, COT, DECIMAL, DEGREES, DIGITS, DOUBLE, EXP, FLOAT, FLOOR, INTEGER, LN, LOG, LOG10, MOD, POWER, RADIANS, RAND, REAL, ROUND, SIGN, SIN, SMALLINT, SQRT, TAN, TRUNCATE
    Timedate functions: CURDATE, CURTIME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, DAYNAME, DAYOFWEEK, DAYOFYEAR, HOUR, MINUTE, MONTH, MONTHNAME, NOW, QUARTER, SECOND, WEEK, YEAR
    System functions: DATABASE, NULLIF, USER

Oracle
    String functions: ASCII, BIT_LENGTH, CHAR, CONCAT, INSERT, LCASE, LEFT, LENGTH, LOCATE, LOCATE2, LTRIM, OCTET_LENGTH, REPEAT, REPLACE, RIGHT, RTRIM, SOUNDEX, SPACE, SUBSTRING, UCASE
    Numeric functions: ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, LOG10, MOD, PI, POWER, ROUND, SIGN, SIN, SQRT, TAN, TRUNCATE
    Timedate functions: CURDATE, DAYNAME, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, HOUR, MINUTE, MONTH, MONTHNAME, NOW, QUARTER, SECOND, WEEK, YEAR
    System functions: IFNULL, USER


Table 1-1. Scalar Functions Supported (cont.)

SQL Server
    String functions: ASCII, BIT_LENGTH, CHAR, CONCAT, DIFFERENCE, INSERT, LCASE, LEFT, LENGTH, LOCATE, LTRIM, OCTET_LENGTH, REPEAT, REPLACE, RIGHT, RTRIM, SOUNDEX, SPACE, SUBSTRING, UCASE
    Numeric functions: ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, DEGREES, EXP, FLOOR, LOG, LOG10, MOD, PI, POWER, RADIANS, RAND, ROUND, SIGN, SIN, SQRT, TAN, TRUNCATE
    Timedate functions: CURDATE, CURTIME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, DAYNAME, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, EXTRACT, HOUR, MINUTE, MONTH, MONTHNAME, NOW, QUARTER, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, WEEK, YEAR
    System functions: CONVERT, DATABASE, IFNULL, USER

Sybase
    String functions: ASCII, CHAR, CONCAT, DIFFERENCE, INSERT, LCASE, LEFT, LENGTH, LOCATE, LTRIM, REPEAT, RIGHT, RTRIM, SOUNDEX, SPACE, SUBSTRING, UCASE
    Numeric functions: ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, DEGREES, EXP, FLOOR, LOG, LOG10, MOD, PI, POWER, RADIANS, RAND, ROUND, SIGN, SIN, SQRT, TAN
    Timedate functions: DAYNAME, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, HOUR, MINUTE, MONTH, MONTHNAME, NOW, QUARTER, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, WEEK, YEAR
    System functions: DATABASE, IFNULL, USER


Stored Procedure Escape


A stored procedure is an executable object that is stored in the data store. Generally, it is one or more SQL statements that have been precompiled. The escape sequence for calling a procedure is: {[?=]call procedure-name[(parameter[,parameter]...)]} where: procedure-name specifies the name of a stored procedure. parameter specifies a stored procedure parameter. The data provider translates the escape to the underlying database's format for executing a stored procedure when both of the following conditions are true:

- The CommandType property of the data provider's Command object is set to either CommandType.StoredProcedure or CommandType.Text.
- The CommandText property of the Command object conforms to the defined escape syntax.

NOTE: Using a stored procedure escape does not change the existing behavior of CommandType.StoredProcedure (that is, when Command.CommandText is set only to the procedure name). It only adds to the existing support for calling stored procedures.
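As an illustration of the syntax above, the following calls are sketches using hypothetical procedure names (RefreshInventory, AddCustomer, and GetOrderCount are not part of the product):

```sql
-- No parameters and no return value
{call RefreshInventory}

-- Two input parameters bound by the application
{call AddCustomer(?, ?)}

-- A procedure whose return value is bound to the leading parameter marker
{?=call GetOrderCount(?)}
```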

Outer Join Escape Sequence


The data providers support the SQL92 left, right, and full outer join syntax. The escape sequence for outer joins is:

{oj outer-join}

where outer-join is:

table-reference {LEFT | RIGHT | FULL} OUTER JOIN {table-reference | outer-join} ON search-condition

where:

table-reference is a table name.

search-condition is the join condition that you want to use for the tables.

Example:

SELECT Customers.CustID, Customers.Name, Orders.OrderID, Orders.Status
FROM {oj Customers LEFT OUTER JOIN Orders ON Customers.CustID=Orders.CustID}
WHERE Orders.Status='OPEN'

Table 1-2 lists the outer join escape sequences that the data providers support.

Table 1-2. Outer Join Escape Sequences Supported

DB2: Left outer joins, Right outer joins, Full outer joins
Oracle: Left outer joins, Right outer joins, Nested outer joins
SQL Server: Left outer joins, Right outer joins, Full outer joins
Sybase: Left outer joins, Right outer joins, Nested outer joins
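For data stores that support nested outer joins, an inner {oj ...} escape can appear in place of a table reference. A sketch using the hypothetical Customers, Orders, and Items tables:

```sql
-- Nested outer join: the inner {oj ...} joins Orders to Items, and that
-- result is left-outer-joined to Customers (table and column names are
-- hypothetical, for illustration only)
SELECT Customers.Name, Orders.OrderID, Items.ItemID
FROM {oj Customers LEFT OUTER JOIN
       {oj Orders LEFT OUTER JOIN Items ON Orders.OrderID=Items.OrderID}
     ON Customers.CustID=Orders.CustID}
```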

SQL Extension Escape


The RowSetSize property of the ProviderCommand object allows applications to limit the size of the result set returned (refer to the DataDirect Connect for ADO.NET User's Guide and the data provider's online help for information about the .NET objects supported).

NOTE: The Entity Framework data providers use the ADO.NET Entity Framework programming contexts instead of the RowSetSize property.

Developers must set the property explicitly, for example:

OracleCommand.RowSetSize = 10;

Although using the provider-specific RowSetSize property is convenient, it means that the programmer cannot code to generic ADO.NET interfaces such as IDbCommand. To increase the interoperability of the code, developers can use the SQL extension escape for the RowSetSize property instead. For example:

my_SQL_statement {ext RowSetSize x}

where my_SQL_statement is a SQL statement, and x is the number of rows to which the application wants the result set limited. This extension can be used with any SQL statement. However, if the statement generates no results, for example, a DELETE statement, then the extension has no effect.

NOTE: The SQL extension escape must be placed at the end of the SQL statement. Otherwise, the database server may return a syntax error when the statement is executed.


Using the RowSetSize SQL escape extension has the same effect as setting the ProviderCommand.RowSetSize property. However, the effect is limited to the result set created by the SQL statement. The RowSetSize SQL escape extension does not set the RowSetSize property on the Command object.

Example:

SELECT * FROM mytable WHERE mycolumn2 > 100 {ext RowSetSize 100}

A maximum of 100 rows are returned from the result set. If the result set contains fewer than 100 rows, then the SQL extension escape has no effect. The size of the result sets that are created by subsequent SQL statements is not limited. If the application contains both the RowSetSize SQL extension escape and the RowSetSize property for a command, the escape takes precedence.


Locking and Isolation Levels


This chapter discusses locking and isolation levels, and how their settings can affect the data that you retrieve. Different database systems support different locking and isolation levels. Refer to the section "Isolation and Lock Levels Supported" in the chapter for each data provider in the DataDirect Connect Series for ADO.NET User's Guide.

Locking
Locking is a database operation that restricts a user from accessing a table or record. Locking is used in situations where more than one user might try to use the same table or record at the same time. By locking the table or record, the system ensures that only one user at a time can affect the data. Locking is critical in multiuser databases, where different users can try to access or modify the same records concurrently. Although such concurrent database activity is desirable, it can create problems. Without locking, for example, if two users try to modify the same record at the same time, they might encounter problems ranging from retrieving bad data to deleting data that the other user needs. If, however, the first user to access a record can lock that record to temporarily prevent other users from modifying it, such problems can be avoided. Locking provides a way to manage concurrent database access while minimizing the various problems it can cause.

Locking Modes and Levels


Different database systems employ various locking modes, but they have two basic modes in common: shared and exclusive.

Shared locks can be held on a single object by multiple users. If one user has a shared lock on a record, then a second user can also get a shared lock on that same record; however, the second user cannot get an exclusive lock on that record. Exclusive locks are exclusive to the user that obtains them. If one user has an exclusive lock on a record, then a second user cannot get either type of lock on the same record.

Performance and concurrency can also be affected by the locking level that is used in the database system. The locking level determines the size of an object that is locked in a database. For example, many database systems let you lock an entire table, as well as individual records. An intermediate level of locking, page-level locking, is also common. A page contains one or more records and is typically the amount of data that is read from the disk in a single disk access. The major disadvantage of page-level locking is that if one user locks a record, a second user may not be able to lock other records because they are stored on the same page as the locked record.


Chapter 2 Locking and Isolation Levels

Isolation Levels
An isolation level represents a particular locking strategy that is employed in the database system to improve data consistency. The higher the isolation level, the more complex the locking strategy behind it. The isolation level that is provided by the database determines whether a transaction will encounter the following behaviors in data consistency:

Dirty reads: User 1 modifies a row. User 2 reads the same row before User 1 commits. User 1 performs a rollback. User 2 has read a row that has never really existed in the database. User 2 may base decisions on false data.

Non-repeatable reads: User 1 reads a row but does not commit. User 2 modifies or deletes the same row and then commits. User 1 rereads the row and finds that it has changed (or has been deleted).

Phantom reads: User 1 uses a search condition to read a set of rows but does not commit. User 2 inserts one or more rows that satisfy this search condition, then commits. User 1 rereads the rows using the search condition and discovers rows that were not present before.

Isolation levels represent the DBMS's ability to prevent these behaviors. The American National Standards Institute (ANSI) defines four isolation levels:

Read uncommitted (0) Read committed (1) Repeatable read (2) Serializable (3)

In ascending order (0 to 3), these isolation levels provide an increasing amount of data consistency to the transaction. At the lowest level, all three behaviors can occur. At the highest level, none can occur. The success of each level in preventing these behaviors is due to the locking strategies that they employ, which are as follows:

Read uncommitted (0): Locks are obtained on modifications to the database and held until end of transaction (EOT). Reading from the database does not involve any locking.

Read committed (1): Locks are acquired for reading and modifying the database. Locks are released after reading, but locks on modified objects are held until EOT.

Repeatable read (2): Locks are obtained for reading and modifying the database. Locks on all modified objects are held until EOT. Locks obtained for reading data are held until EOT. Locks on unmodified access structures (such as indexes and hashing structures) are released after reading.

Serializable (3): A lock is placed on the affected rows of the DataSet until EOT. All access structures that are modified, and those used by the query, are locked until EOT.
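An application typically requests one of these levels when it begins a transaction. A sketch in Transact-SQL (the statement name and the accepted level keywords vary across database systems; Accounts is a hypothetical table):

```sql
-- Request isolation level 1 (read committed) for the transaction that follows.
-- Transact-SQL syntax shown; other databases use different statements.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;
SELECT Balance FROM Accounts WHERE AccountID = 1;
COMMIT TRANSACTION;
```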


Table 2-1 shows what data consistency behaviors can occur at each isolation level.


Table 2-1. Isolation Levels and Data Consistency

Level                  Dirty Read    Nonrepeatable Read    Phantom Read
0, Read uncommitted    Yes           Yes                   Yes
1, Read committed      No            Yes                   Yes
2, Repeatable read     No            No                    Yes
3, Serializable        No            No                    No

Although higher isolation levels provide better data consistency, this consistency can be costly in terms of the concurrency that is provided to individual users. Concurrency is the ability of multiple users to access and modify data simultaneously. As isolation levels increase, so does the chance that the locking strategy used will create problems in concurrency. The higher the isolation level, the more locking involved, and the more time users may spend waiting for data to be freed by another user. Because of this inverse relationship between isolation levels and concurrency, you must consider how people use the database before choosing an isolation level. You must weigh the trade-offs between data consistency and concurrency, and decide which is more important.



Using Your Data Provider with the ADO.NET Entity Framework


The ADO.NET Entity Framework is an object-relational mapping (ORM) framework for the .NET Framework. Developers can use it to create data access applications by programming against a conceptual application model instead of programming directly against a relational storage schema. This model allows developers to decrease the amount of code that must be written and maintained in data-centric applications. DataDirect Connect for ADO.NET Entity Framework data providers can be used with applications that use the standard ADO.NET features or with the ADO.NET Entity Framework.

The Oracle ADO.NET Entity Framework data provider can be used with applications that use the features of the standard .NET Framework 4.0 and the ADO.NET Entity Framework 4.1 and 4.2. To use Plain Old CLR Objects (POCO) entities or Model First, you must use .NET Framework 4.0. To use Code First, you must use ADO.NET Entity Framework 4.1 or 4.2. The DB2 and Sybase ADO.NET Entity Framework data providers can be used with applications that use the features of the standard .NET Framework 3.5 SP1, including ADO.NET Entity Framework functionality. This means that these data providers support the Database First approach. To use Plain Old CLR Objects (POCO) entities, you must use .NET Framework 4.0.

See the README text file shipped with your Progress DataDirect product for the file name of the ADO.NET Entity Framework data provider.


Chapter 3 Using Your Data Provider with the ADO.NET Entity Framework

Using the Database First Model


The Entity Framework creates a model of your data in Visual Studio.

NOTE: Developing using the Database First model requires that you are using Microsoft .NET Framework Version 3.5 SP1 or higher, and Visual Studio 2008 SP1 or higher with the DataDirect Connect for ADO.NET Version 4.0 ADO.NET Entity Framework data providers.

The following procedure uses the Oracle ADO.NET Entity Framework data provider, and assumes that you already have the database schema available.

1. Create a new application, such as Windows Console, Windows Forms, or ASP.NET, in Visual Studio.

2. In the Solution Explorer, right-click the project and select Add / New Item. The Add New Item window appears.


3. Select ADO.NET Entity Data Model, and then click Add. The Choose Model Contents window appears.


4. Select Generate from database, and then click Next. The Choose Your Data Connection window appears.

5. If you want to use an established connection, select it from the drop-down list and continue at Step 7. To create a new connection, continue at Step 6.

6. Click New Connection... to create a new connection.

   a. On the Choose Data Source window, select Other in the Data source list, then select your data provider, for example, Progress DataDirect Connect for ADO.NET Oracle Data Provider, in the Data provider drop-down list.

   b. Click Continue. The Connection Properties window appears.

   c. Provide the necessary connection information; then, click OK.


7. The wizard creates an Entity connection string.

   a. If the radio buttons are selectable, select Yes, include the sensitive data in the connection string.

   b. In the Save entity connection settings... field, type a name for the main data access class, or accept the default.

   c. Click Next. The Choose Your Database Objects window appears.


8. Select the database objects that you want to use in the model.


9. Click Finish. The model is generated and opened in the Model Browser.


Using Model First


NOTE: Developing using the Model First model requires that you are using Microsoft .NET Framework Version 4.0 or higher, and Visual Studio 2010 or higher with the DataDirect Connect for ADO.NET Version 4.0 Oracle ADO.NET Entity Framework data provider.

The following procedure uses the Oracle ADO.NET Entity Framework data provider, and assumes that you already have the database schema available.

1. Create a new application, such as Windows Console, Windows Forms, or ASP.NET, in Visual Studio.

2. Expand the project in Solution Explorer.


3. Right-click the project and select Add / New Item.


4. Select ADO.NET Entity Data Model, and then click Add. The Entity Data Model Wizard is displayed.


5. Select Empty Model, and then click Finish. An empty model is added to your application.


6. Right-click the empty model and select Properties / DDL Generation Template. Click the drop-down menu and set the value to DataDirect SSDLToOracle.tt (VS).


7. Design your model (refer to MSDN for a wide assortment of tutorials).


8. When you are satisfied with the model design, right-click the model and select Generate database from model. The Generate Database wizard is displayed.

9. On the Choose Your Data Connection window, do the following steps:

   a. Select or create a connection: select an existing connection from the drop-down list, or click New Connection to create a new connection.

   b. Choose whether to include sensitive information.

   c. Click Next. The DDL is generated.


10. Click Finish. The SQL is added to your application.


11. Copy the DDL and execute it against the connection using any tool.


After executing the DDL, the backend database is ready for use, with all of the database objects mapped to your model.


Using the Code First Model


NOTE: Developing using the Code First model requires that you are using Microsoft .NET Framework Version 4.0 or higher, the ADO.NET Entity Framework 4.1 or 4.2, and Visual Studio 2010 or higher with the DataDirect Connect for ADO.NET Version 4.0 ADO.NET Entity Framework data providers.

The following procedure uses the Oracle ADO.NET Entity Framework data provider, and assumes that you already have the database schema available.

1. Create a new application, such as Windows Console, Windows Forms, or ASP.NET, in Visual Studio and add an Entity Data Model (see Step 1 through Step 5 on page 28). The model is added to the application.

namespace NameSpace
{
    public class ModelContext : DbContext
    {
        public ModelContext() { }
        public ModelContext(string conn) : base(conn) { }
        public DbSet<TAB> Tabs { get; set; }
    }

    public class TAB
    {
        public string ID { get; set; }
        public string name { get; set; }
        public string col { get; set; }
    }
}

2. Define a connection with the same name as the context:

<connectionStrings>
    <add name="ModelContext"
         connectionString="host=host;port=151;user id=***;password=***;sid=sid;LicensePath=***"
         providerName="DDTek.Oracle" />
</connectionStrings>

3. Instantiate the context:

ModelContext ctx = new ModelContext();
ctx.Tabs.Add(new TAB { ID = "ID1" });
ctx.SaveChanges();


Using Stored Procedures to Provide Functionality


The Connection object includes properties and methods that provide reauthentication and enhanced statistics functionality. The methods and properties are standard in the ADO.NET data provider, but are not available at the ADO.NET Entity Framework layer. Instead, you expose the same functionality through "pseudo" stored procedures. This approach uses the Entity Data Model (EDM) to achieve results that correspond to the ADO.NET results. This in effect provides entities and functions backed by pseudo stored procedures. Table 3-1 lists the mapping of the data provider's Connection properties and methods to the corresponding pseudo stored procedures.

Table 3-1. Mapping to Pseudo Stored Procedures

Connection Property             Pseudo Stored Procedure
CurrentPassword (1)             DDTek_Connection_Reauthenticate
CurrentUser (1)                 DDTek_Connection_Reauthenticate
CurrentUserAffinityTimeout (1)  DDTek_Connection_Reauthenticate
StatisticsEnabled               DDTek_Connection_EnableStatistics, DDTek_Connection_DisableStatistics

Connection Method               Pseudo Stored Procedure
ResetStatistics                 DDTek_Connection_ResetStatistics
RetrieveStatistics              DDTek_Connection_RetrieveStatistics

(1) Supported for the DB2 and Oracle Entity Framework data providers.

You can create a function mapping in the entity model to invoke the pseudo stored procedure. Alternatively, applications can use the ObjectContext to create a stored procedure command as shown in the following C# code fragment:

using (MyContext context = new MyContext())
{
    EntityConnection entityConnection = (EntityConnection)context.Connection;
    // The EntityConnection exposes the underlying store connection
    DbConnection storeConnection = entityConnection.StoreConnection;
    DbCommand command = storeConnection.CreateCommand();
    command.CommandText = "DDTek_Connection_EnableStatistics";
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.Add(new OracleParameter("cid", 1));

    bool openingConnection = command.Connection.State == ConnectionState.Closed;
    if (openingConnection)
    {
        command.Connection.Open();
    }

    int result;
    try
    {
        result = command.ExecuteNonQuery();
    }
    finally
    {
        if (openingConnection && command.Connection.State == ConnectionState.Open)
        {
            command.Connection.Close();
        }
    }
}

Extending Entity Framework Functionality


The Entity Framework offers powerful productivity gains by masking many ADO.NET features, simplifying application development. DataDirect Connect for ADO.NET Entity Framework data providers include functionality designed to optimize performance.

Configuration Options
The Entity Framework data provider defines options that configure performance and specific behaviors. These options exist in the machine.config, devenv.exe.config, edmgen.exe.config, app.config, and/or the web.config file. The product installer specifies default values for the Visual Studio 2008 and Visual Studio 2010 devenv.exe.config and the EdmGen.exe.config files. If necessary, you can alter the default values in the configuration files, for example, to enable the use of the Enterprise Library Logging Application Block. The performance and behavior of the EdmGen and Visual Studio tools, when using the Entity Framework data provider to create and manipulate ADO.NET Entity Data Models, can be affected by the data provider configuration options. For example, setting edmSchemaRestrictions to User can improve performance, but may not display all the database objects that you need for your entity model.

Specifying Data Provider Version Information in the .config File


To accommodate differences between earlier releases of the Entity Framework data provider, you can specify versioned configuration options in the configuration files. By default, non-versioned configuration entries apply to the latest release installed. Configuration options applicable to older releases must contain a version-specific identifier.


Suppose you have installed both Release 4.0 and Release 3.5 of the Oracle Entity Framework data provider. In the following example, the value of the edmSchemaRestrictions configuration option is set to User for Release 3.5, and to Accessible for Release 4.0.

<ddtek.oracle.entity.3.5 edmSchemaRestrictions="User" />
<ddtek.oracle.entity edmSchemaRestrictions="Accessible" />


Specifying Enterprise Library Version Information in the .config File


You can specify in the .config file which version of the Enterprise Library you want to use. By default, the data providers are configured to use Enterprise Library 5.0. However, if you want to continue using Enterprise Library 4.1 (October 2008), you can do so by modifying the .config file. For example, to target the Enterprise Library 4.1 Logging Application Block, add the following entry:

<ddtek.db2.entity LABAssemblyName="Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL" />

Designing an Entity Data Model


Building large models with the Entity Data Model (EDM) can be very inefficient. For optimal results, consider breaking up a model when it has reached 50 to 100 entities. In addition, carefully consider which objects are actually needed in the model. To restrict or expand the database objects available to you when generating a model, use the edmSchemaRestrictions configuration option. The option filters the schema objects that are returned when building the EDM that your application includes. Restricting the objects can also provide a performance improvement. The following configuration option entry for the Oracle Entity Framework data provider limits the objects available for the model to those for which the current user is the owner:

<ddtek.oracle.entity edmSchemaRestrictions="User" />

Refer to the DataDirect Connect Series for ADO.NET User's Guide for information about your Entity Framework data provider.

Logging Application Block Support


Applications that use the standard Logging Application Block (LAB) from the Microsoft Enterprise Library Version 5.0 or Version 4.1 (October 2008) and the related design patterns can quickly display the SQL generated as part of the DataDirect Connect data providers for ADO.NET Entity Framework. For more information, see "Logging Application Blocks" on page 54.


Enhancing ADO.NET Entity Framework Performance


Although the Entity Framework offers powerful productivity gains, some developers believe that the Entity Framework takes too much control of the features they need to optimize performance in their applications. ADO.NET has a well-established set of relatively simple methods that can be used to enable and manage features such as connection statistics and reauthentication. DataDirect ADO.NET Entity Framework data providers include additional enhancements that can be used to enable, retrieve, and reset statistical counters on a connection. Developers can use these enhancements to determine and then ultimately improve the application's runtime performance. The Entity Framework includes a similar set of methods that have been tailored to be useful for Entity Framework consumers such as LINQ, EntitySQL, and ObjectServices. This functionality is modeled in the XML file provided in Appendix A, "Using an .edmx File," on page 105. By surfacing the DDTekConnectionStatistics and DDTekStatus entities, you can quickly model this code using the standard tooling.

Obtaining Connection Statistics


To obtain connection statistics, first establish an Entity Container, DDTekConnectionContext, in which two Entity Sets, DDTekConnectionStatistics and DDTekStatus, are defined. Then, to interact with each Entity, include functions to retrieve results. The following C# code fragment shows how you gain access to these statistics using the ObjectContext:

DDTekConnectionContext objCtx = new DDTekConnectionContext();
DDTekStatus status = objCtx.DisableStatistics().First();
MessageBox.Show("StatisticsEnabled = " + status.StatisticsEnabled);
status = objCtx.EnableStatistics().First();
MessageBox.Show("StatisticsEnabled = " + status.StatisticsEnabled);
DDTekConnectionStatistics statistics = objCtx.RetrieveStatistics().First();
MessageBox.Show("BytesReceived/Sent: " + statistics.BytesReceived + "/" + statistics.BytesSent);

where DDTekConnectionContext is declared in the app.config file:

<add name="DDTekConnectionContext"
     connectionString="metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl;provider=DDTek.Oracle;provider connection string=&quot;Host=nc-lnx02;Password=login4;Pooling=False;SID=CP31;User ID=login4;Reauthentication Enabled=true&quot;"
     providerName="System.Data.EntityClient" />

For more information about the data provider's support for connection statistics, refer to the DataDirect Connect Series for ADO.NET User's Guide.


Implementing Reauthentication
Typically, you can configure a connection pool to provide scalability for connections. In addition, to help minimize the number of connections required in a connection pool, you can switch the user associated with a connection to another user, a process known as reauthentication. For example, suppose you are using Kerberos authentication to authenticate users using their operating system user name and password. To reduce the number of connections that must be created and managed, you may want to switch the user associated with a connection to multiple users using reauthentication. For example, suppose your connection pool contains a connection, Conn, which was established using the user ALLUSERS. You can have that connection service multiple users, User A, B, C, and so on, by switching the user associated with the connection Conn to User A, B, C, and so on. For more information about the data provider's support for reauthentication, refer to the DataDirect Connect Series for ADO.NET User's Guide.

This functionality is modeled in the XML file provided in Appendix A, "Using an .edmx File," on page 105. By surfacing the DDTekConnectionStatistics and DDTekStatus entities, you can quickly model this code using the standard tooling. First, we establish an Entity Container, DDTekConnectionContext, in which we have two Entity Sets: DDTekConnectionStatistics and DDTekStatus. To interact with each Entity, include functions to retrieve results. The following C# code fragment shows how you gain access to this functionality:

DDTekConnectionContext objCtx = new DDTekConnectionContext();
try
{
    MessageBox.Show("CurrentUser = " + status.CurrentUser);
    status = objCtx.Reauthenticate("login5", "login5", 600).First();
    MessageBox.Show("CurrentUser = " + status.CurrentUser);
}
catch (Exception ex)
{
    MessageBox.Show(ex.ToString());
}

where DDTekConnectionContext is declared in the app.config file:

<add name="DDTekConnectionContext"
     connectionString="metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl;provider=DDTek.Oracle;provider connection string=&quot;Host=nc-lnx02;Password=login4;Pooling=False;SID=CP31;User ID=login4;Reauthentication Enabled=true&quot;"
     providerName="System.Data.EntityClient" />

For more information about your data provider's support for reauthentication, refer to the DataDirect Connect Series for ADO.NET User's Guide.


Model First and Code First Support


The Oracle Entity Framework data provider supports the Model First and Code First development concepts. Model First was introduced in .NET Framework 4.0. It is focused around the ability to start with a conceptual model and create the database from it. Additional configuration can be supplied using Data Annotations or via a fluent API. Code First was introduced in Entity Framework 4.1. It is focused around defining your model using C#/Visual Basic .NET classes. These classes can then be mapped to an existing database or be used to generate a database schema. Additional configuration can be supplied using Data Annotations or via a fluent API. For detailed information, refer to the Beginner's Guide to the ADO.NET Entity Framework and other pages on the Microsoft Web site.

Using the Performance Tuning Wizard


You can use the Performance Wizard whether you are using the ADO.NET Entity Framework data provider or the standard ADO.NET data provider. The Performance Wizard can also be used with applications that are used with the Data Access Application Blocks and Logging Application Blocks.

Mapping EDM Canonical Functions to Data Source Functions


The ADO.NET Entity Framework translates the Entity Data Model (EDM) canonical functions to the corresponding data source functionality for the ADO.NET Entity Framework data provider. The function invocations are expressed in a common form across data sources. Because these canonical functions are independent of data sources, argument and return types of canonical functions are defined in terms of types in the EDM. When an Entity SQL query uses canonical functions, the appropriate function is called at the data source. Both null-input behavior and error conditions are explicitly specified for all canonical functions. However, the ADO.NET Entity Framework does not enforce this behavior. Further details are available at: http://msdn.microsoft.com/en-us/library/bb738626.aspx For more information about mapping EDM canonical functions, refer to the DataDirect Connect Series for ADO.NET User's Guide.


Implementation Differences for the ADO.NET Entity Framework Provider


Much of the functionality in the ADO.NET Entity Framework data provider is the same as in the ADO.NET data provider. In some cases, default values and coding implementation differ to enhance the performance of ADO.NET Entity Framework applications. The ADO.NET Entity Framework programming contexts inherently eliminate the need to use some ADO.NET methods and properties. These properties and methods remain useful for standard ADO.NET applications. The online help, which is integrated into Visual Studio, describes the public methods and properties of each class. Refer to the DataDirect Connect Series for ADO.NET Users Guide for a list of the connection string options that have different default values when used with an ADO.NET Entity Framework application.

For More Information


Refer to the following sources for additional information about the ADO.NET and the Entity Framework:

- The DataDirect Connect for ADO.NET web page provides additional information and examples about using the data provider.
- The DataConnections blog provides the latest information about our support for the ADO.NET Entity Framework and other information about the DataDirect Connect ADO.NET data providers.
- Programming Entity Framework by Julie Lerman provides a comprehensive discussion of using the ADO.NET Entity Framework.
- ADO.NET Entity Framework introduces the Entity Framework and provides links to numerous detailed articles.
- Connection Strings (Entity Framework) describes how connection strings are used by the Entity Framework. The connection strings contain information used to connect to the underlying ADO.NET data provider, as well as information about the required Entity Data Model mapping and metadata.
- Working with POCO Entities explains how you can use existing domain objects and other CLR objects with your data model.
- Performance Considerations (Entity Framework) describes some implementation considerations for improving the performance of Entity Framework applications.
- Entity Data Model Tools describes the tools that help you build applications graphically with the EDM: the Entity Data Model Wizard, the ADO.NET Entity Data Model Designer (Entity Designer), and the Update Model Wizard. These tools work together to help you generate, edit, and update an Entity Data Model.
- LINQ to Entities enables developers to write queries against the database from the same language used to build the business logic.


Using the Microsoft Enterprise Library


Using the Microsoft Enterprise Library can simplify application development by wrapping common tasks, such as data access, into portable code that makes it easier to move your application from one DBMS to another. DataDirect Connect for ADO.NET data providers can be used with Data Access Application Blocks (DAABs). The classes in the DAABs provide access to the most frequently used features of ADO.NET. Applications can use the DAABs for tasks such as passing data through application layers and returning changed data back to the database. Using DAABs eliminates the need to rewrite the same data access tasks for each new or revised application, so you can spend your time more productively. Applications that use the standard Logging Application Block and design patterns can quickly display the SQL generated by the DataDirect Connect for ADO.NET data providers that support the Microsoft ADO.NET Entity Framework. To use features of the Enterprise Library with your data provider, download Microsoft Enterprise Library 5.0 (April 2010) from http://www.codeplex.com/entlib. The Enterprise Library 5.0 documentation contains detailed information about using the application blocks. NOTE: Enterprise Library 5.0 requires Windows 7, Windows Vista SP2, or Windows Server 2003 SP2. If you are using the data providers on Windows XP SP2, you can use Enterprise Library 4.1 (October 2008). See Appendix B, Using Enterprise Library 4.1, on page 111 for configuration information.

Data Access Application Block Overview


The Data Access Application Block (DAAB) is designed to allow developers to replace ADO.NET boiler-plate code with standardized code for everyday database tasks. The overloaded methods in the Database class can:

- Return scalar values.
- Determine which parameters are needed and create them.
- Involve commands in a transaction.
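As a hedged sketch of these overloads (the DAAB entry name "MyDB2", the table, and the stored procedure are placeholders; the Database methods themselves are part of the Enterprise Library):

```csharp
using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

class DaabOverloads
{
    static void Demo()
    {
        Database db = DatabaseFactory.CreateDatabase("MyDB2");

        // Return a scalar value.
        object count = db.ExecuteScalar(CommandType.Text,
            "SELECT COUNT(*) FROM EMPLOYEES");

        // Let the block determine which parameters a stored procedure
        // needs and create them from the supplied values.
        DataSet ds = db.ExecuteDataSet("GET_EMPLOYEES_BY_DEPT", "R&D");

        // Involve commands in a transaction.
        using (DbConnection conn = db.CreateConnection())
        {
            conn.Open();
            DbTransaction tx = conn.BeginTransaction();
            db.ExecuteNonQuery(tx, CommandType.Text,
                "UPDATE EMPLOYEES SET ACTIVE = 0 WHERE ID = 42");
            tx.Commit();
        }
    }
}
```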

If your application needs to address specific DBMS functionality, you can use a DataDirect Connect for ADO.NET data provider.

When Should You Use DAABs?


DAABs include a small number of methods that simplify the most common methods of accessing a database. Each method encapsulates the logic required to retrieve the data and manage the connection to the database. You should consider using the application block if your application uses standard data access techniques.


The DAAB is used with ADO.NET, increasing efficiency and productivity when creating applications for ADO.NET. The abstract Database class provides a number of methods, such as ExecuteNonQuery, ExecuteReader, and ExecuteScalar, that are the same as the methods used by the DbCommand class or, if you are using database-specific code, by a data provider-specific class such as OracleCommand. Although using the default DAAB during development is convenient, the resulting application lacks portability. When you use the provider-specific DataDirect Connect for ADO.NET DAAB implementation, the application includes the DataDirect Connect data provider's SQL leveling capabilities. You have more flexibility, whether your application needs to access multiple databases or you anticipate a change in your target data source.

Should You Use Generic or Database-specific Classes?


The application block supplements the code in ADO.NET that allows you to use the same code with different database types. You have two choices when using the DAAB with DataDirect Connect for ADO.NET:

- The GenericDatabase class
- The provider-specific DAAB implementation

The GenericDatabase class option is less suited to applications that need specific control of database behaviors. For portability, the GenericDatabase solution is the optimal approach. If your application needs to retrieve data in a specialized way, or if your code needs customization to take advantage of features specific to a DBMS, using the DataDirect Connect for ADO.NET data provider for that DBMS might be better suited to your needs.

Configuring the DAAB


Before you can configure the DAAB for use with your application, you must set up the environment:
1 Make sure that you have installed Microsoft Enterprise Library 5.0.
2 Open the DataDirect Enterprise Library project for your data provider, located in install_dir\Enterprise Libraries\Src\CS\. The default configurations were created with Microsoft Enterprise Library 5.0.
3 Compile your project and note the output directory.

Configuring the Data Access Application Block consists of two procedures:


- "Adding a New DAAB Entry" on page 49
- "Adding the Data Access Application Block to Your Application" on page 52


Adding a New DAAB Entry


Now, use the Enterprise Library Configuration Tool to add a new DAAB entry. This procedure uses the configuration options for the .NET Framework 3.5 and the DB2 Enterprise Library sample solution. To configure the Data Access Application Block on any supported platform:
1 Right-click on your project in Solution Explorer. Select Add / New Item. The Add New Item window appears.
2 Select the Application Configuration File template. Then, click Add. The App.config file is added to the project.
3 Select Start / Programs / Microsoft patterns and practices / Enterprise Library 5.0 / Enterprise Library Configuration / EntLib Config .NET 3.5. The Enterprise Library Configuration window appears.


4 Select File / Open. Then, select the App.config file you created in Step 2 and click OK. The App.config file is displayed.

5 Click the expander arrow button ( ) to the left of the Database Settings title to display the configured connection strings.


6 Click the plus sign button ( ) in the Database Instances column and select Add Database Connection String. This adds a new connection string item to the configuration.


7 In the Name field, enter a name for the DAAB's connection string, for example, MyDB2conn.
8 In the Connection String field, click the ellipsis button ( ) to display the Edit Text Value dialog box. Type or paste a connection string in the text box and click OK. For example, type: Database Name=DEV1DB9A;Host=dev1;Port=6070;User ID=TEST01;Encryption Method=SSL;AuthenticationMethod=Kerberos;
9 In the Database Provider drop-down list, select the data provider. For example, select DDTek.DB2.4.0 for the DB2 data provider.

10 Click the chevron button ( ) to the right of the Database Settings title. In the Default Database Instance drop-down list, select the instance that you want to use, in this example, dev1.DEV1DB9A.

11 Select File / Save.

Adding the Data Access Application Block to Your Application


To add the DAAB to a new or existing application, perform these steps: 1 Add the following References to your Visual Studio solution:

- Microsoft.Practices.EnterpriseLibrary.Common.dll
- Microsoft.Practices.EnterpriseLibrary.Data.dll
- Microsoft.Practices.ServiceLocation.dll

2 Add the following directives to your C# source code:

using Microsoft.Practices.EnterpriseLibrary.Data;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using System.Data;

3 Rebuild the solution to ensure that the new dependencies are functional.

4 Determine the output Debug or Release path location of your current solution, and switch back to the Enterprise Library Configuration window (see "Adding a New DAAB Entry" on page 49).
5 Right-click the connection string under the Application Node and select Save Application.
6 Navigate to the Debug or Release output directories of your current solution, and locate the .exe file of the current solution.
7 Using File Explorer, copy the DDTek.EnterpriseLibrary.Data.XXX.dll into your application's working directory.

Using the Data Access Application Block in Application Code


Now that you have configured the DAAB, you can build applications on top of it. In the following example, we use the DAAB entry MyDB2 and the DatabaseFactory to generate an instance of a Database object backed by a DB2 data source.

using System;
using System.Collections.Generic;
using System.Text;
using System.Data;
using Microsoft.Practices.ServiceLocation;
using Microsoft.Practices.EnterpriseLibrary.Data;
using Microsoft.Practices.EnterpriseLibrary.Common;

namespace DAAB_Test_App_1
{
    class Program
    {
        static void Main(string[] args)
        {
            Database database = DatabaseFactory.CreateDatabase("MyDB2");
            DataSet ds = database.ExecuteDataSet(CommandType.TableDirect,
                "SQLCOMMANDTEST_NCSRVR_1");
        }
    }
}

The Microsoft Enterprise Library DAAB coding patterns are now at your disposal.

Using the DAAB Classes with Enterprise Library Version 4.1


If you need to target Enterprise Library 4.1, open the DataDirect Enterprise Library project for your data provider, located in install_dir\Enterprise Libraries\Src\CS\ and use the Debug41 and Release41 configurations.


Logging Application Blocks


Using the Enterprise Library Logging Application Block (LAB) makes it easier to implement common logging functions. DataDirect Connect data providers that support the ADO.NET Entity Framework use the standard Logging Application Block and design patterns, and offer LAB customizations for additional functionality. To use features of the Enterprise Library with your data provider, download Microsoft Enterprise Library from http://www.codeplex.com/entlib. The Enterprise Library installation by default includes the Enterprise Library documentation, which contains detailed information about using the application blocks. NOTE: Enterprise Library 5.0 requires Windows 7, Windows Vista SP2, or Windows Server 2003 SP2. If you are using the data providers on Windows XP, you can use Enterprise Library 4.1 (October 2008).

When Should You Use the LAB?


The DataDirect ADO.NET Entity Framework data providers include a set of LAB customizations that are useful for developing with the ADO.NET Entity Framework when you want to log the Command Trees and SQL generated when using the data provider.

Configuring the LAB


Logging capability can be added to an application by adding an entry to the application's app.config or web.config configuration file using the Enterprise Library configuration tool. The tool provides specific instructions for enabling the Logging Application Block in the configuration file, as well as the AppSetting necessary to enable the LAB. To enable Logging Application Block output, set the environment property DDTek_Enable_Logging_Application_Block_Trace to true. Alternatively, in the app.config file, set the EnableLoggingApplicationBlock AppSetting property to true. To disable the Logging Application Block, set either of these properties to false. The following configuration XML snippet from the app.config file enables logging for the Oracle Entity Framework data provider.

<configuration>
  <configSections>
    <section name="ddtek.oracle.entity"
             type="DDTek.Oracle.Entity.OracleEntitySettings, DDTek.Oracle.Entity, Version=3.5.0.0, Culture=neutral, PublicKeyToken=c84cd5c63851e072" />
  </configSections>
  <ddtek.oracle.entity EnableLoggingApplicationBlock="true" />
</configuration>

The SQL logged to the Logging Block is the SQL that is ultimately transmitted to the data source.


The following procedure uses the configuration options for the .NET Framework 3.5. To configure the Logging Application Block on any supported platform:
1 Select Start / Programs / Microsoft patterns and practices / Enterprise Library 5.0 / Enterprise Library Configuration / EntLib Config .NET 3.5. The Enterprise Library Configuration window appears.


2 Select Blocks / Add Logging Settings. Additional fields appear on the New Configuration window.

3 Add a flat file trace listener file:
a Click the Logging Target Listeners plus sign button ( ). Then, select Add Logging Target Listeners / Add Flat File Trace Listener.
b In the properties pane next to the File Name property, click the ellipsis button ( ). The Open File window appears.
c Browse to the target location for the file. Then, type a file name for the trace listener file, and click Open. In this example, the file name is labtrace.log.


4 Click the plus sign button ( ) to the right of the Categories heading; then, select Add Category. The Category section is expanded.


5 Define the characteristics of the new category:
a In the Name field, type the name of the new category. In this example, the category DDTek Error will be created.
b Click the plus sign button ( ) next to the Listeners heading. Then, select Flat File Trace Listener from the drop-down list.
c From the Minimum Severity drop-down list, select Error.


6 Repeat Step 3 through Step 5 to create the following categories:

- DDTek Information: Information not related to errors
- DDTek Command: Enables SQL, Parameter, and DbCommandTree logging

7 Select File / Save As. The Save Configuration File window appears. Type a name for your configuration file. By default, the file is saved to C:\Program Files\Microsoft Enterprise Library 5.0\Bin\filename.exe.config, where filename is the name that you typed in the Save Configuration File window.

Using the LAB in Application Code


The LAB that you configured must be added to the app.config or web.config file for your application. Table 4-1 describes settings you can use to enable and configure the data provider's interaction with the LAB.

Table 4-1. LAB Configuration Settings

enableLoggingApplicationBlock: Enables the Logging Application Block.
labLogEntryTypeName: Specifies the LogEntry type name for the LogEntry object.
labLoggerTypeName: Specifies the Logger type name for the Logging Application Block.
labAssemblyName: Specifies the assembly name to which the Logging Application Block applies. NOTE: If you are using any version of the LAB other than the Microsoft Enterprise Library 5.0 binary release, you must set labAssemblyName. For example, if you are using an older or newer version of the LAB, or a version that you have customized, you must specify a value for labAssemblyName.
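As a sketch only, these settings are supplied through the data provider's configuration section, in the same way as the EnableLoggingApplicationBlock attribute shown earlier in this chapter. The attribute spelling and the assembly-name value below are illustrative assumptions and should be confirmed against your provider's documentation:

```xml
<ddtek.oracle.entity
    EnableLoggingApplicationBlock="true"
    LABAssemblyName="Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
```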

The following code fragment provides an example of a Logging Application Block that could be added to an Oracle data access application.

<loggingConfiguration name="Logging Application Block" tracingEnabled="true"
    defaultCategory="" logWarningsWhenNoCategoriesMatch="true">
  <listeners>
    <add fileName="rolling.log"
         footer="----------------------------------------"
         header="----------------------------------------"
         rollFileExistsBehavior="Overwrite" rollInterval="None"
         rollSizeKB="0" timeStampPattern="yyyy-MM-dd"
         listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.RollingFlatFileTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
         traceOutputOptions="None" filter="All"
         type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.RollingFlatFileTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
         name="Rolling Flat File Trace Listener" />
  </listeners>
  <formatters>
    <add template="Message: {message}&#xD;&#xA;Category: {category}&#xD;&#xA;Priority: {priority}&#xD;&#xA;EventId: {eventid}&#xD;&#xA;Severity: {severity}&#xD;&#xA;Title:{title}&#xD;&#xA;&#xD;&#xA;"
         type="Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.TextFormatter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
         name="Text Formatter" />
  </formatters>
  <categorySources>
    <add switchValue="All" name="DDTek">
      <listeners>
        <add name="Rolling Flat File Trace Listener" />
      </listeners>
    </add>
  </categorySources>
  <specialSources>
    <allEvents switchValue="All" name="All Events" />
    <notProcessed switchValue="All" name="Unprocessed Category" />
    <errors switchValue="All" name="Logging Errors &amp; Warnings">
      <listeners>
        <add name="Rolling Flat File Trace Listener" />
      </listeners>
    </errors>
  </specialSources>
</loggingConfiguration>

Using Different Versions of the Logging Application Block


By default, the Entity Framework data providers use the Enterprise Library 5.0 Logging Application Block. If you need to use a different version of the Logging Application Block, you can specify the labAssemblyName setting in your .config file. See "Specifying Enterprise Library Version Information in the .config File" on page 41 for more information.

For More Information


The following sources provide additional information about using Application Blocks:

- The Microsoft patterns & practices Developer Center includes an overview of Application Block topics: http://msdn.microsoft.com/en-us/library/ff632023.aspx.
- The Microsoft patterns & practices Enterprise Library includes an FAQ section on using Logging Application Blocks: http://entlib.codeplex.com/Wiki/View.aspx?title=EntLib%20FAQ.
- The Microsoft Enterprise Library 5.0 Hands-on Labs provide detailed examples that help you learn about the application blocks: http://entlib.codeplex.com/.
- "The Data Access Application Block" provides an overview of tasks and applications using Data Access Application Blocks: http://msdn.microsoft.com/en-us/library/ff664408(v=PandP.50).aspx.
- The DataConnections blog provides the latest information about our support for the ADO.NET Entity Framework and other information about DataDirect Connect ADO.NET data providers.


Getting Schema Information


Applications can request that data providers find and return metadata for a database. Schema collections specific to each data provider expose database schema elements such as tables and columns. The data provider uses the GetSchema method of the Connection class. You can also retrieve schema information from a result set, as described in "Columns Returned by the GetSchemaTable Method" on page 61. Each data provider also includes provider-specific schema collections. Using the schema collection name MetaDataCollections, you can return a list of the supported schema collections, and the number of restrictions that they support.
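For example, using the generic ADO.NET base classes, an application might list the supported schema collections as sketched below. The provider invariant name "DDTek.Oracle" and the connection string are placeholders; substitute your installed provider's registered name and a valid connection string.

```csharp
using System;
using System.Data;
using System.Data.Common;

class SchemaCollectionsSketch
{
    static void Main()
    {
        // Placeholder provider invariant name and connection string.
        DbProviderFactory factory = DbProviderFactories.GetFactory("DDTek.Oracle");
        using (DbConnection conn = factory.CreateConnection())
        {
            conn.ConnectionString = "Host=myhost;Port=1521;User ID=scott;Password=secret";
            conn.Open();

            // MetaDataCollections lists every schema collection the provider
            // supports and the number of restrictions each one accepts.
            DataTable collections = conn.GetSchema("MetaDataCollections");
            foreach (DataRow row in collections.Rows)
                Console.WriteLine("{0} ({1} restrictions)",
                    row["CollectionName"], row["NumberOfRestrictions"]);
        }
    }
}
```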

Columns Returned by the GetSchemaTable Method


While a DataReader is open, you can retrieve schema information from the result set. For each data provider, the result set produced for PrefixDataReader.GetSchemaTable() returns the columns described in Table 5-1, in the order shown.

Table 5-1. Columns Returned by GetSchemaTable on DataReader

ColumnName: Specifies the name of the column, which might not be unique. If the name cannot be determined, a null value is returned. This name reflects the most recent renaming of the column in the current view or command text.
ColumnOrdinal: Specifies the ordinal of the column, which cannot be null. The bookmark column of the row, if any, is 0. Other columns are numbered starting with 1.
ColumnSize: Specifies the maximum possible length of a value in the column. For columns that use a fixed-length data type, this is the size of the data type.
NumericPrecision: If the ProviderType column is a numeric data type, this is the maximum precision of the column. If the column type is not a numeric data type, the value is null.
NumericScale: If the column data type has a scale component, specifies the number of digits to the right of the decimal point. NumericScale applies to types with fractional seconds, such as Time and DateTime types. Otherwise, this is a null value.
DataType: The underlying type of the column. For the DB2 data provider, if Xml Describe Type is set to Binary, this should be System.Byte[]. Otherwise, this should be System.String. Refer to the DataDirect Connect for ADO.NET Users Guide for information about setting connection string options. This value cannot be null.


Table 5-1. Columns Returned by GetSchemaTable on DataReader (cont.)

ProviderType: Specifies the provider-defined indicator of the column's data type. This column cannot be null. If the data type of the column varies from row to row, this must be Object. For the DB2 data provider, if Xml Describe Type is set to Binary, this should be System.Byte[]. Otherwise, this should be System.String.
IsLong: Set if the column contains a BLOB, CLOB, LONG VARBINARY, LONG VARCHAR, or (for DB2) LONG VARGRAPHIC that contains very long data. The definition of very long data is provider-specific.
AllowDBNull: Set to true if the AllowDbNull constraint is set to true for the column. Otherwise, the value is false.
IsReadOnly: The value is true if the column cannot be modified; otherwise, the value is false.
IsRowVersion: Set if the column contains a persistent row identifier that cannot be written to, and has no meaningful value except to identify the row.
IsUnique: Specifies whether the column constitutes a key by itself or whether a constraint of type UNIQUE applies only to this column. When set to true, no two rows in the base table (the table returned in BaseTableName) can have the same value in this column. When set to false, the column can contain duplicate values in the base table.
IsKey: When set to true, the column is one of a set of columns that, taken together, uniquely identify the row in the DataTable. The set of columns with IsKey set to true must uniquely identify a row in the DataTable; that set may be generated from a DataTable primary key. When set to false, the column is not required to uniquely identify the row.
IsAutoIncrement: Specifies whether the column assigns values to new rows in fixed increments. When set to true, the column assigns values to new rows in fixed increments. When set to false, the column does not assign values to new rows in fixed increments.
BaseSchemaName: Specifies the name of the schema in the database that contains the column. The value is null if the base schema name cannot be determined.
BaseCatalogName: Specifies the name of the catalog in the data store that contains the column. A null value is used if the base catalog name cannot be determined.
BaseTableName: Specifies the name of the table or view in the data store that contains the column. A null value is used if the base table name cannot be determined.
BaseColumnName: Specifies the name of the column in the data store. This might be different from the column name returned in the ColumnName column if an alias was used. A null value is used if the base column name cannot be determined or if the rowset column is derived from, but is not identical to, a column in the database.
IsAliased: Specifies whether the name of the column is an alias. The value true is returned if the column name is an alias; otherwise, false is returned.
IsExpression: Specifies whether the column is an expression. The value true is returned if the column is an expression; otherwise, false is returned.


Table 5-1. Columns Returned by GetSchemaTable on DataReader (cont.)

IsIdentity: Specifies whether the column is an identity column. The value true is returned if the column is an identity column; otherwise, false is returned.
IsHidden: Specifies whether the column is hidden. The value true is returned if the column is hidden; otherwise, false is returned.
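As a sketch (the table name is illustrative), the columns above can be inspected from an open DataReader:

```csharp
using System;
using System.Data;
using System.Data.Common;

class ResultSchemaSketch
{
    static void DumpResultSchema(DbConnection conn)
    {
        using (DbCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText = "SELECT * FROM EMPLOYEES"; // illustrative table
            // SchemaOnly fetches column metadata without returning rows.
            using (DbDataReader reader =
                cmd.ExecuteReader(CommandBehavior.SchemaOnly))
            {
                // One row per result-set column, in the order described above.
                DataTable schema = reader.GetSchemaTable();
                foreach (DataRow col in schema.Rows)
                    Console.WriteLine("{0}: {1}, nullable={2}",
                        col["ColumnName"], col["DataType"], col["AllowDBNull"]);
            }
        }
    }
}
```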

Retrieving Schema Metadata with the GetSchema Method


Applications use the GetSchema method of the Connection object to retrieve Schema Metadata about a data provider and/or data source. Each provider implements a number of Schema collections, including the five standard metadata collections:

- "MetaDataCollections Schema Collections" on page 63
- "DataSourceInformation Schema Collection" on page 64
- "DataTypes Collection" on page 65
- "ReservedWords Collection" on page 67
- "Restrictions Collection" on page 67

The data providers also support additional schema collections beyond the five standard ones. See "Additional Schema Metadata Collections" on page 68 for details about the other collections supported by the data providers. NOTE: Refer to the .NET Framework documentation for additional background and functional requirements, including the required data type for each ColumnName.
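For instance, most collections accept a restriction array that scopes the result. In this hedged sketch the Columns collection is restricted to one table; the schema and table names are placeholders, and the restriction layout and result column names vary by provider.

```csharp
using System;
using System.Data;
using System.Data.Common;

class RestrictedSchemaSketch
{
    static void ListColumns(DbConnection conn)
    {
        // A common restriction layout for the Columns collection is
        // {catalog, schema, table, column}; null leaves a slot unrestricted.
        // "SCOTT" and "EMPLOYEES" are placeholder names.
        string[] restrictions = { null, "SCOTT", "EMPLOYEES", null };
        DataTable columns = conn.GetSchema("Columns", restrictions);
        foreach (DataRow row in columns.Rows)
            // The result column name ("COLUMN_NAME" here) is provider-specific.
            Console.WriteLine(row["COLUMN_NAME"]);
    }
}
```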

MetaDataCollections Schema Collections


The MetaDataCollections schema collection is a list of the schema collections that are available to the logged-in user. The MetaDataCollections collection can return the supported columns described in Table 5-2 in any order.

Table 5-2. Columns Returned by the MetaDataCollections Schema Collection

CollectionName: The name of the collection to pass to the GetSchema method to return the collection.
NumberOfRestrictions: The number of restrictions that may be specified for the collection.
NumberOfIdentifierParts: The number of parts in the composite identifier/database object name.


DataSourceInformation Schema Collection


The DataSourceInformation schema collection can return the supported columns, described in Table 5-3, in any order.

Table 5-3. Columns Returned by the DataSourceInformation Collection

CompositeIdentifierSeparatorPattern: The regular expression to match the composite separators in a composite identifier.
DataSourceProductName: The name of the product accessed by the data provider.
DataSourceProductVersion: The version of the product accessed by the data provider, in the data source's native format.
DataSourceProductVersionNormalized: A normalized version for the data source. This allows the version to be compared with String.Compare().
DefaultSchema: The default schema in which data source interaction operates if a schema is not specified.
GroupByBehavior: Specifies the relationship between the columns in a GROUP BY clause and the non-aggregated columns in the select list.
Host: The host to which the data provider is connected.
IdentifierCase: Indicates whether non-quoted identifiers are treated as case sensitive.
IdentifierPattern: A regular expression that matches an identifier and has a match value of the identifier.
OrderByColumnsInSelect: Specifies whether columns in an ORDER BY clause must be in the select list. A value of true indicates that they are required to be in the select list; a value of false indicates that they are not.
ParameterMarkerFormat: A format string that represents how to format a parameter.
ParameterMarkerPattern: A regular expression that matches a parameter marker. It will have a match value of the parameter name, if any.
ParameterNameMaxLength: The maximum length of a parameter name in characters.
ParameterNamePattern: A regular expression that matches the valid parameter names.
QuotedIdentifierCase: Indicates whether quoted identifiers are treated as case sensitive.
QuotedIdentifierPattern: A regular expression that matches a quoted identifier and has a match value of the identifier itself without the quotation marks.
StatementSeparatorPattern: A regular expression that matches the statement separator.
StringLiteralPattern: A regular expression that matches a string literal and has a match value of the literal itself.
SupportedJoinOperators: Specifies the types of SQL join statements that are supported by the data source.
SupportsReauthentication: Specifies whether the data source supports reauthentication.
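A short sketch reading a few of these values (which columns are populated varies by provider):

```csharp
using System;
using System.Data;
using System.Data.Common;

class SourceInfoSketch
{
    static void Show(DbConnection conn)
    {
        // DataSourceInformation returns a single row describing the source.
        DataTable info = conn.GetSchema("DataSourceInformation");
        DataRow row = info.Rows[0];
        Console.WriteLine("Product: {0}", row["DataSourceProductName"]);
        Console.WriteLine("Version: {0}", row["DataSourceProductVersion"]);
        Console.WriteLine("Marker:  {0}", row["ParameterMarkerFormat"]);
    }
}
```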

Table 5-4 lists the provider-specific ColumnNames:

Table 5-4. Provider-specific ColumnNames

Oracle, SID: SID of the data source (1)
Oracle, ServiceName: Service name from the tnsnames.ora file (1)
SQL Server, NameInstance: An instance of SQL Server running on a host

(1) SID and ServiceName are mutually exclusive in a connection string or data source.

DataTypes Collection
Table 5-5 describes the supported columns of the DataTypes schema collection. The columns can be returned in any order.

Table 5-5. ColumnNames Returned by the DataTypes Collection

ColumnSize: The length of a non-numeric column or parameter; refers to either the maximum or the length defined for this type by the data provider.
CreateFormat: Format string that represents how to add this column to a data definition statement, such as CREATE TABLE.
CreateParameters: The creation parameters that must be specified when creating a column of this data type. Each creation parameter is listed in the string, separated by a comma in the order they are to be supplied. For example, the SQL data type DECIMAL needs a precision and a scale. In this case, the creation parameters should contain the string "precision, scale". In a text command to create a DECIMAL column with a precision of 10 and a scale of 2, the value of the CreateFormat column might be "DECIMAL({0},{1})" and the complete type specification would be DECIMAL(10,2).
DataType: The name of the .NET Framework type of the data type.
IsAutoIncrementable: Specifies whether values of a data type are auto-incremented. true: Values of this data type may be auto-incremented. false: Values of this data type may not be auto-incremented.
IsBestMatch: Specifies whether the data type is the best match between all data types in the data store and the .NET Framework data type that is indicated by the value in the DataType column. true: The data type is the best match. false: The data type is not the best match.

DataDirect Connect Series for ADO.NET Reference


Chapter 5 Getting Schema Information

Table 5-5. ColumnNames Returned by the DataTypes Collection (cont.)

IsCaseSensitive: Specifies whether the data type is both a character type and case-sensitive.
true: The data type is a character type and is case-sensitive.
false: The data type is not a character type or is not case-sensitive.

IsConcurrencyType:
true: The data type is updated by the database every time the row is changed and the value of the column is different from all previous values.
false: The data type is not updated by the database every time the row is changed.

IsFixedLength:
true: Columns of this data type created by the data definition language (DDL) will be of fixed length.
false: Columns of this data type created by the DDL will be of variable length.

IsFixedPrecisionScale:
true: The data type has a fixed precision and scale.
false: The data type does not have a fixed precision and scale.

IsLiteralsSupported:
true: The data type can be expressed as a literal.
false: The data type cannot be expressed as a literal.

IsLong:
true: The data type contains very long data. The definition of very long data is provider-specific.
false: The data type does not contain very long data.

IsNullable:
true: The data type is nullable.
false: The data type is not nullable.

IsSearchable:
true: The data type can be used in a WHERE clause with any operator except the LIKE predicate.
false: The data type cannot be used in a WHERE clause.

IsSearchableWithLike:
true: The data type can be used with the LIKE predicate.
false: The data type cannot be used with the LIKE predicate.

IsUnsigned:
true: The data type is unsigned.
false: The data type is signed.

LiteralPrefix: The prefix applied to a given literal.

LiteralSuffix: The suffix applied to a given literal.

MaximumScale: If the type indicator is a numeric type, this is the maximum number of digits allowed to the right of the decimal point. Otherwise, this is DBNull.Value.

NativeDataType: An OLE DB-specific column for exposing the OLE DB type of the data type.

ProviderDbType: The provider-specific type value that should be used when specifying a parameter's type.

TypeName: The provider-specific data type name.
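As a minimal sketch of how an application might consume this collection, the following C# method lists each provider-specific type name together with the .NET Framework type it maps to. It assumes conn is an open DbConnection obtained from one of the DataDirect data providers; the method and variable names are illustrative.

```csharp
using System;
using System.Data;
using System.Data.Common;

class DataTypesDemo
{
    // Lists each provider-specific type name with the .NET type it maps to.
    // Assumes "conn" is an open connection created with a DataDirect
    // data provider.
    static void PrintDataTypes(DbConnection conn)
    {
        DataTable types = conn.GetSchema(DbMetaDataCollectionNames.DataTypes);
        foreach (DataRow row in types.Rows)
        {
            Console.WriteLine("{0} -> {1}", row["TypeName"], row["DataType"]);
        }
    }
}
```

Because the columns can be returned in any order, the code accesses them by name rather than by ordinal.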


ReservedWords Collection
This schema collection exposes information about the words that are reserved by the database to which the data provider is connected. Table 5-6 describes the columns that the data provider supports.

Table 5-6. ReservedWords Schema Collection

ColumnName       Description
Reserved Word    Provider-specific reserved words

Restrictions Collection
The Restrictions schema collection exposes information about the restrictions that are supported by the data provider that is currently connected to the database. Table 5-7 describes the columns that are returned by the data providers. The columns can be returned in any order.

DataDirect Connect for ADO.NET data providers use standardized names for restrictions. If a data provider supports a restriction for a Schema method, it always uses the same name for the restriction. The case sensitivity of any restriction value is determined by the underlying database, and can be determined by the IdentifierCase and QuotedIdentifierCase values in the DataSourceInformation collection (see "DataSourceInformation Schema Collection" on page 64).

Table 5-7. ColumnNames Returned by the Restrictions Collection

CollectionName: The name of the collection to which the specified restrictions apply
RestrictionName: The name of the restriction in the collection
RestrictionDefault: Ignored
RestrictionNumber: The actual location in the collection restrictions for this restriction
IsRequired: Specifies whether the restriction is required
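An application can query this collection to discover, at run time, which restrictions the connected data provider supports and at which positional index each one appears. The following hedged sketch assumes conn is an open DbConnection from a DataDirect data provider:

```csharp
using System;
using System.Data;
using System.Data.Common;

class RestrictionsDemo
{
    // Prints the restriction names each collection supports, with the
    // positional index each restriction occupies in the restrictions array.
    static void PrintRestrictions(DbConnection conn)
    {
        DataTable restrictions =
            conn.GetSchema(DbMetaDataCollectionNames.Restrictions);
        foreach (DataRow row in restrictions.Rows)
        {
            Console.WriteLine("{0}: {1} (position {2})",
                row["CollectionName"], row["RestrictionName"],
                row["RestrictionNumber"]);
        }
    }
}
```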


Additional Schema Metadata Collections


All DataDirect providers support additional MetaData collections. Result sets for each MetaData collection are defined in the section for each collection. However, it is important to note that the DataDirect Connect for ADO.NET data providers do not return exactly the same result set for each MetaData collection. As is standard coding practice in .NET, each data provider returns only the columns of the result set that apply to it.

Additionally, a data provider might not implement every MetaData collection. If a collection does not apply to a particular database, the data provider does not implement that collection. Finally, not all restrictions are the same across providers for each collection. Restrictions that do not apply to a given data source are not implemented for that data provider's MetaData collections (for example, Catalog restrictions are not implemented for the Oracle data provider). Therefore, it is important that applications that use the MetaData collections conform to the following best practices:

- Get result column data using the name of the column, not the ordinal, for example, using the DataTable.Columns property rather than getColumn(1). This lets the client program know whether the column exists for a given MetaData collection.
- Check for the existence of a MetaData collection before calling it. Use the MetaDataCollections collection to determine which collections are supported, for example, by calling GetSchema(DbMetaDataCollectionNames.MetaDataCollections) on the Connection object. This lets the program know whether the given collection exists for the given data provider.
- Check for the existence of a particular restriction before using it. For example, retrieve the Restrictions MetaData collection by calling GetSchema(DbMetaDataCollectionNames.Restrictions).
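These practices can be sketched in C# as follows. The helper and the "Catalogs" example are illustrative; conn is assumed to be an open DbConnection from a DataDirect data provider:

```csharp
using System;
using System.Data;
using System.Data.Common;

class BestPracticesDemo
{
    // Returns true if the connected data provider exposes the named collection.
    static bool CollectionSupported(DbConnection conn, string collectionName)
    {
        DataTable collections =
            conn.GetSchema(DbMetaDataCollectionNames.MetaDataCollections);
        foreach (DataRow row in collections.Rows)
        {
            if (string.Equals((string)row["CollectionName"], collectionName,
                              StringComparison.OrdinalIgnoreCase))
                return true;
        }
        return false;
    }

    static void PrintCatalogNames(DbConnection conn)
    {
        // Check that the collection exists before calling it; the Oracle
        // data provider, for example, does not implement Catalogs.
        if (!CollectionSupported(conn, "Catalogs"))
            return;

        DataTable catalogs = conn.GetSchema("Catalogs");

        // Access result columns by name, never by ordinal.
        if (catalogs.Columns.Contains("CATALOG_NAME"))
        {
            foreach (DataRow row in catalogs.Rows)
                Console.WriteLine(row["CATALOG_NAME"]);
        }
    }
}
```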

Catalogs Schema Collection


Description: The Catalogs collection identifies the physical attributes associated with catalogs that are accessible from the DBMS. For some systems, there may be only one catalog.
Number of restrictions: 1
Restrictions available: CATALOG_NAME
Sort order: CATALOG_NAME
NOTE: The Oracle data provider does not support the Catalogs collection.


Table 5-8. Catalogs Schema Collection

CATALOG_NAME (String): Catalog name. Cannot be null.

DESCRIPTION (String): A description of the catalog (if any). If none, an empty string must be returned.

1. All classes are System.XXX. For example, System.String.

Columns Schema Collection


Description: The Columns collection identifies the columns of tables (including views) defined in the catalog that are accessible to a given user.
Number of restrictions: 3
Restrictions available: TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
Sort order: TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION

Table 5-9. Columns Schema Collection

CHARACTER_MAXIMUM_LENGTH (Int32): The maximum possible length of a value in the column. For character, binary, or bit columns, this is one of the following:
- The maximum length of the column in characters, bytes, or bits, respectively, if one is defined.
- The maximum length of the data type in characters, bytes, or bits, respectively, if the column does not have a defined length.
- Zero (0) if neither the column nor the data type has a defined maximum length, or if the column is not a character, binary, or bit column.

CHARACTER_OCTET_LENGTH (Int32): The maximum length in octets (bytes) of the column, if the type of the column is character or binary. A value of zero (0) means the column has no maximum length or that the column is not a character or binary column.

CHARACTER_SET_CATALOG (String): Catalog name in which the character set is defined. This column does not exist if the provider does not support catalogs or different character sets.


Table 5-9. Columns Schema Collection (cont.)

CHARACTER_SET_NAME (String): Character set name. This column does not exist if the provider does not support different character sets.

CHARACTER_SET_SCHEMA (String): Unqualified schema name in which the character set is defined. This column does not exist if the provider does not support schemas or different character sets.

COLLATION_CATALOG (String): The catalog name in which the collation is defined. This column exists only if the data provider supports catalogs or different collations.

COLLATION_NAME (String): Collation name. This column exists only if the provider supports different collations.

COLLATION_SCHEMA (String): Unqualified schema name in which the collation is defined. This column exists only if the data provider supports schemas or different collations.

COLUMN_DEFAULT (String): Default value of the column.

COLUMN_HASDEFAULT (Boolean):
true: The column has a default value.
false: The column does not have a default value, or it is unknown whether the column has a default value.

COLUMN_NAME (String): The name of the column; this might not be unique. This column is returned only if the data provider supports catalogs.

DATA_TYPE (Object): The indicator of the column's data type. If Xml Describe Type is set to Clob, this should be a System.String. Otherwise, this should be System.Byte. This value cannot be null.

DATETIME_PRECISION (Int32): Datetime precision (number of digits in the fractional seconds portion) of the column if the column is a datetime. If the column's data type is not datetime, this is DbNull.

IS_NULLABLE (Boolean):
true: The column might be nullable.
false: The column is known not to be nullable.

NATIVE_DATA_TYPE (String): The data source description of the type. This should be BLOB. This value cannot be null.

NUMERIC_PRECISION (Int32): If the column's data type is numeric, this is the maximum precision of the column. This column is returned only if the data provider supports catalogs.


Table 5-9. Columns Schema Collection (cont.)

NUMERIC_PRECISION_RADIX (Int32): The radix indicates in which base the values in NUMERIC_PRECISION and NUMERIC_SCALE are expressed. It is only useful to return either 2 or 10. This column is returned only if the data provider supports catalogs.

NUMERIC_SCALE (Int32): If the column's type is a numeric type that has a scale, this is the number of digits to the right of the decimal point. This column is returned only if the data provider supports catalogs.

ORDINAL_POSITION (Int32): The ordinal of the column. Columns are numbered starting from one. This column is returned only if the data provider supports catalogs.

PROVIDER_DEFINED_TYPE (Int32): The data source defined type of the column as mapped to the type enumeration of the data provider. For example, for Oracle, this is the DDTek.Oracle.OracleDbType enumeration. This value cannot be null.

PROVIDER_GENERIC_TYPE (Int32): The provider-defined type of the column as mapped to the System.Data.DbType enumeration. This value cannot be null.

TABLE_NAME (String): The table name. This column is returned only if the data provider supports catalogs.

TABLE_SCHEMA (String): The unqualified schema name.

1. All classes are System.XXX. For example, System.String.
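The restriction values for this collection are positional, matching the order given above (TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME). The following sketch retrieves the columns of one table; the schema and table names are placeholders, and conn is assumed to be an open DbConnection from a DataDirect data provider:

```csharp
using System;
using System.Data;
using System.Data.Common;

class ColumnsDemo
{
    // Restrictions are positional: TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME.
    // A null entry means "do not restrict on this value". "SCOTT" and
    // "EMP" below are placeholder names for your own objects.
    static void PrintColumns(DbConnection conn)
    {
        string[] restrictions = { "SCOTT", "EMP", null };
        DataTable columns = conn.GetSchema("Columns", restrictions);
        foreach (DataRow row in columns.Rows)
        {
            Console.WriteLine("{0} (ordinal {1}, nullable: {2})",
                row["COLUMN_NAME"], row["ORDINAL_POSITION"],
                row["IS_NULLABLE"]);
        }
    }
}
```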

ForeignKeys Schema Collection


Description: The ForeignKeys collection identifies the foreign key columns that are defined in the catalog by a given user.
Number of restrictions: 4
Restrictions available: PK_TABLE_SCHEMA, PK_TABLE_NAME, FK_TABLE_SCHEMA, FK_TABLE_NAME
Sort order: FK_TABLE_SCHEMA, FK_TABLE_NAME


Table 5-10. ForeignKeys Schema Collection

DEFERRABILITY (String): The deferrability of the foreign key. The value is one of the following:
INITIALLY DEFERRED
INITIALLY IMMEDIATE
NOT DEFERRABLE

DELETE_RULE (String): If a delete rule was specified, the value is one of the following:
CASCADE: A referential action of CASCADE was specified.
SET NULL: A referential action of SET NULL was specified.
SET DEFAULT: A referential action of SET DEFAULT was specified.
NO ACTION: A referential action of NO ACTION was specified.
For some data providers, this column does not exist if they cannot determine the DELETE_RULE. In most cases, this implies a default of NO ACTION.

FK_COLUMN_NAME (String): Foreign key column name.

FK_NAME (String): Foreign key name. This column exists only if the data provider supports named foreign key constraints.

FK_TABLE_NAME (String): Foreign key table name.

FK_TABLE_SCHEMA (String): Unqualified schema name in which the foreign key table is defined. This column exists only if the data provider supports schemas.

ORDINAL (Int32): The order of the column names in the key. For example, a table might contain several foreign key references to another table. The ordinal starts over for each reference; for example, two references to a three-column key would return 1, 2, 3, 1, 2, 3.

PK_COLUMN_NAME (String): Primary key column name.

PK_NAME (String): Primary key name. This column exists only if the data provider supports named primary key constraints.

PK_TABLE_NAME (String): Primary key table name.


Table 5-10. ForeignKeys Schema Collection (cont.)

PK_TABLE_SCHEMA (String): Unqualified schema name in which the primary key table is defined. This column exists only if the data provider supports schemas.

UPDATE_RULE (String): If an update rule was specified, the value is one of the following:
CASCADE: A referential action of CASCADE was specified.
SET NULL: A referential action of SET NULL was specified.
SET DEFAULT: A referential action of SET DEFAULT was specified.
NO ACTION: A referential action of NO ACTION was specified.
For some data providers, this column will not exist if they cannot determine the UPDATE_RULE. In most cases, this implies a default of NO ACTION.

1. All classes are System.XXX. For example, System.String.
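The four restriction values are positional (PK_TABLE_SCHEMA, PK_TABLE_NAME, FK_TABLE_SCHEMA, FK_TABLE_NAME), so an application can restrict on either side of the relationship. The following sketch restricts on the foreign key table only; "SCOTT" and "EMP" are placeholder names, and conn is assumed to be an open DbConnection from a DataDirect data provider:

```csharp
using System;
using System.Data;
using System.Data.Common;

class ForeignKeysDemo
{
    // Restrictions are positional: PK_TABLE_SCHEMA, PK_TABLE_NAME,
    // FK_TABLE_SCHEMA, FK_TABLE_NAME. Only the foreign key table is
    // restricted here.
    static void PrintForeignKeys(DbConnection conn)
    {
        string[] restrictions = { null, null, "SCOTT", "EMP" };
        DataTable keys = conn.GetSchema("ForeignKeys", restrictions);
        foreach (DataRow row in keys.Rows)
        {
            Console.WriteLine("{0}.{1} references {2}.{3} (ordinal {4})",
                row["FK_TABLE_NAME"], row["FK_COLUMN_NAME"],
                row["PK_TABLE_NAME"], row["PK_COLUMN_NAME"], row["ORDINAL"]);
        }
    }
}
```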

Indexes Schema Collection


Description: The Indexes collection identifies the indexes that are defined in the catalog that are owned by a given user.
Number of restrictions: 4
Restrictions available: TABLE_SCHEMA, INDEX_NAME, TYPE, TABLE_NAME
Sort order: UNIQUE, TYPE, INDEX_CATALOG, INDEX_SCHEMA, INDEX_NAME, ORDINAL_POSITION


Table 5-11. Indexes Schema Collection

CARDINALITY (Int32): The number of unique values in the index.

CLUSTERED (Boolean): Determines whether an index is clustered. This is one of the following:
true: The leaf nodes of the index contain full rows, not bookmarks. This is a way to represent a table clustered by key value.
false: The leaf nodes of the index contain bookmarks of the base table rows whose key value matches the key value of the index entry.

COLLATION (String): This is one of the following:
ASC: The sort sequence for the column is ascending.
DESC: The sort sequence for the column is descending.
This column exists only when a column sort sequence is supported.

COLUMN_NAME (String): The column name.

FILL_FACTOR (Int32): For a B+-tree index, this property represents the storage utilization factor of page nodes during the creation of the index. The value is an integer from 0 to 100, representing the percentage of use of an index node. For a linear hash index, this property represents the storage utilization of the entire hash structure (the ratio of used area to total allocated area) before a file structure expansion occurs.

FILTER_CONDITION (String): The WHERE clause identifying the filtering restriction.

INDEX_CATALOG (String): The catalog name. This column exists only if the data provider supports catalogs.

INDEX_NAME (String): The index name.

INDEX_SCHEMA (String): The unqualified schema name. This column exists only if the data provider supports schemas.

INITIAL_SIZE (Int32): The total amount of bytes allocated to this structure at creation time.

INTEGRATED (Boolean): Whether the index is integrated, that is, whether all base table columns are available from the index. This is one of the following:
true: The index is integrated. For clustered indexes, this value must always be true.
false: The index is not integrated.


Table 5-11. Indexes Schema Collection (cont.)

NULL_COLLATION (String): How NULLs are collated in the index. This is one of the following:
END: NULLs are collated at the end of the list, regardless of the collation order.
START: NULLs are collated at the start of the list, regardless of the collation order.
HIGH: NULLs are collated at the high end of the list.
LOW: NULLs are collated at the low end of the list.

NULLS (Int32): Whether NULL keys are allowed. This is one of the following:
ALLOWNULL: The index allows entries where the key columns are NULL.
DISALLOWNULL: The index does not allow entries where the key columns are NULL. If the consumer attempts to insert an index entry with a NULL key, the data provider returns an error.
IGNORENULL: The index does not insert entries containing NULL keys. If the consumer attempts to insert an index entry with a NULL key, the data provider ignores that entry and no error code is returned.
IGNOREANYNULL: The index does not insert entries where some column key has a NULL value. For an index having a multicolumn search key, if the consumer inserts an index entry with a NULL value in some column of the search key, the provider ignores that entry and no error code is returned.

ORDINAL_POSITION (Int32): The ordinal position of the column in the index, starting with 1.

PAGES (Int32): The number of pages that are used to store the index.

PRIMARY_KEY (Boolean): Determines whether the index represents the primary key on the table. This column does not exist if this is not known.

TABLE_NAME (String): The table name.

TABLE_SCHEMA (String): Unqualified schema name. This column exists only if the data provider supports schemas.


Table 5-11. Indexes Schema Collection (cont.)

TYPE (String): The type of the index. This is one of the following:
BTREE: The index is a B+-tree.
HASH: The index is a hash file using, for example, linear or extensible hashing.
CONTENT: The index is a content index.
OTHER: The index is some other type of index.

UNIQUE (Boolean): Determines whether index keys must be unique. This is one of the following:
true: The index keys must be unique.
false: Duplicate keys are allowed.
1. All classes are System.XXX. For example, System.String.

PrimaryKeys Schema Collection


Description: The PrimaryKeys collection identifies the primary key columns that are defined in the catalog by a given user.
Number of restrictions: 2
Restrictions available: TABLE_SCHEMA, TABLE_NAME
Sort order: TABLE_SCHEMA, TABLE_NAME

Table 5-12. PrimaryKeys Schema Collection

COLUMN_NAME (String): The primary key column name.

ORDINAL (Int32): The order of the column names in the key.

PK_NAME (String): The primary key name.

TABLE_NAME (String): The table name.

TABLE_SCHEMA (String): Unqualified schema name in which the table is defined. This column exists only if the data provider supports schemas.

1. All classes are System.XXX. For example, System.String.


ProcedureParameters Schema Collection


Description: The ProcedureParameters collection returns information about the parameters and return codes of procedures that are part of the Procedures collection.
Number of restrictions: 4
Restrictions available: PROCEDURE_CATALOG, PROCEDURE_SCHEMA, PROCEDURE_NAME, PARAMETER_NAME
Sort order: PROCEDURE_CATALOG, PROCEDURE_SCHEMA, PROCEDURE_NAME, ORDINAL_POSITION

Table 5-13. ProcedureParameters Schema Collection

CHARACTER_MAXIMUM_LENGTH (Int32): The maximum possible length of a value in the parameter. For character, binary, or bit parameters, this is one of the following:
- The maximum length of the parameter in characters, bytes, or bits, respectively, if one is defined. For example, a CHAR(5) parameter has a maximum length of 5.
- The maximum length of the data type in characters, bytes, or bits, respectively, if the parameter does not have a defined length.
- Zero (0) if neither the parameter nor the data type has a defined maximum length.
- DbNull for all other types of parameters.

CHARACTER_OCTET_LENGTH (Int32): The maximum length in octets (bytes) of the parameter, if the type of the parameter is character or binary. If the parameter has no maximum length, the value is zero (0). For all other types of parameters, the value is -1.

DATA_TYPE (Object): The indicator of the column's data type. This value cannot be null.

DESCRIPTION (String): The description of the parameter. For example, the description of the Name parameter in a procedure that adds a new employee might be Employee name.

IS_NULLABLE (Boolean):
true: The parameter might be nullable.
false: The parameter is not nullable.

NATIVE_DATA_TYPE (String): The data source description of the type. This value cannot be null.


Table 5-13. ProcedureParameters Schema Collection (cont.)

NUMERIC_PRECISION (Int32): If the column's data type is numeric, this is the maximum precision of the column. If the column's data type is not numeric, this is DbNull.

NUMERIC_SCALE (Int32): If the column's type is a numeric type that has a scale, this is the number of digits to the right of the decimal point. Otherwise, this is DbNull.

ORDINAL_POSITION (Int32): If the parameter is an input, input/output, or output parameter, this is the one-based ordinal position of the parameter in the procedure call. If the parameter is the return value, this is DbNull.

PARAMETER_DEFAULT (String): The default value of the parameter. If the default value is NULL, the PARAMETER_HASDEFAULT column returns true and the PARAMETER_DEFAULT column does not exist. If PARAMETER_HASDEFAULT is set to false, the PARAMETER_DEFAULT column does not exist.

PARAMETER_HASDEFAULT (Boolean):
true: The parameter has a default value.
false: The parameter does not have a default value, or it is unknown whether the parameter has a default value.

PARAMETER_NAME (String): The parameter name. DbNull if the parameter is not named.

PARAMETER_TYPE (String): This is one of the following:
INPUT: The parameter is an input parameter.
INPUTOUTPUT: The parameter is an input/output parameter.
OUTPUT: The parameter is an output parameter.
RETURNVALUE: The parameter is a procedure return value.
UNKNOWN: The parameter type is unknown to the provider.

PROCEDURE_CATALOG (String): The catalog name. This column exists only if the data provider supports catalogs.

PROCEDURE_NAME (String): The procedure name.

PROCEDURE_SCHEMA (String): The unqualified schema name. This column exists only if the data provider supports schemas.


Table 5-13. ProcedureParameters Schema Collection (cont.)

PROVIDER_DEFINED_TYPE (Int32): The data source defined type of the column as mapped to the type enumeration of the data provider. For example, for the Oracle data provider, this is the DDTek.Oracle.OracleDbType enumeration. This value cannot be null.

PROVIDER_GENERIC_TYPE (Int32): The data source defined type of the column as mapped to the System.Data.DbType enumeration. This value cannot be null.
1. All classes are System.XXX. For example, System.String.
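One common use of this collection is building parameter lists for a DbCommand at run time. The following sketch maps the PARAMETER_TYPE values listed above to System.Data.ParameterDirection; treating UNKNOWN as input is an assumption, not part of the collection's contract:

```csharp
using System.Data;

class ParameterTypeDemo
{
    // Maps a PARAMETER_TYPE value from the ProcedureParameters collection
    // to the corresponding System.Data.ParameterDirection. The fallback
    // for UNKNOWN is an assumption; adjust it to suit your application.
    static ParameterDirection ToDirection(string parameterType)
    {
        switch (parameterType)
        {
            case "INPUTOUTPUT": return ParameterDirection.InputOutput;
            case "OUTPUT":      return ParameterDirection.Output;
            case "RETURNVALUE": return ParameterDirection.ReturnValue;
            default:            return ParameterDirection.Input;
        }
    }
}
```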

Procedures Schema Collection


Description: The Procedures schema collection identifies the procedures that are defined in the catalog. When possible, only procedures for which the connected user has execute permission should be returned.
Number of restrictions: 4
Restrictions available: PROCEDURE_CATALOG, PROCEDURE_SCHEMA, PROCEDURE_NAME, PROCEDURE_TYPE
Sort order: PROCEDURE_CATALOG, PROCEDURE_SCHEMA, PROCEDURE_NAME

Table 5-14. Procedures Schema Collection

DESCRIPTION (String): A description of the procedure. If not available, the provider returns DbNull.

PROCEDURE_CATALOG (String): The catalog name. This column exists only if the data provider supports catalogs.

PROCEDURE_DEFINITION (String): The procedure definition, or DbNull if the data provider does not have this information available.

PROCEDURE_NAME (String): The procedure name.


Table 5-14. Procedures Schema Collection (cont.)

PROCEDURE_SCHEMA (String): Unqualified schema name. This column exists only if the data provider supports schemas.

PROCEDURE_TYPE (String): This is one of the following:
UNKNOWN: It is not known whether there is a returned value.
PROCEDURE: Procedure; there is no returned value.
FUNCTION: Function; there is a returned value.
1. All classes are System.XXX. For example, System.String.

Schemata Schema Collection


Description: The Schemata collection identifies the schemas that are owned by a given user.
Number of restrictions: 3
Restrictions available: CATALOG_NAME, SCHEMA_NAME, SCHEMA_OWNER
Sort order: CATALOG_NAME, SCHEMA_NAME, SCHEMA_OWNER

Table 5-15. Schemata Schema Collection

CATALOG_NAME (String): The catalog name. This column exists only if the data provider supports catalogs.

DEFAULT_CHARACTER_SET_CATALOG (String): The catalog name of the default character set for columns and domains in the schemas. This column exists only if the data provider supports catalogs.

DEFAULT_CHARACTER_SET_NAME (String): The default character set name. This column exists only if the data provider supports different character sets.

DEFAULT_CHARACTER_SET_SCHEMA (String): The unqualified schema name of the default character set for columns and domains in the schemas. This column exists only if the data provider supports different character sets.

SCHEMA_NAME (String): The unqualified schema name.

SCHEMA_OWNER (String): The user that owns the schemas.

1. All classes are System.XXX. For example, System.String


Tables Schema Collection


Description: The Tables collection identifies the tables (including views) that are defined in the catalog that are accessible to a given user.
Number of restrictions: 3
Restrictions available: TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE
Sort order: TABLE_TYPE, TABLE_SCHEMA, TABLE_NAME
NOTE: For the .NET Framework datatypes, all classes are System.xxx.

Table 5-16. Tables Schema Collection

TABLE_SCHEMA (String): The unqualified schema name in which the table is defined. This column exists only if the data provider supports schemas.

TABLE_NAME (String): Table name.

TABLE_TYPE (String): The table type. One of the following or a provider-specific value:
ALIAS
TABLE
SYNONYM
SYSTEM TABLE
VIEW
GLOBAL TEMPORARY
LOCAL TEMPORARY
SYSTEM VIEW
This column cannot contain an empty string.

DESCRIPTION (String): A description of the table. DbNull if no description is associated with the column.
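The TABLE_TYPE restriction makes it easy to list only one kind of object. The following sketch returns only views; conn is assumed to be an open DbConnection from a DataDirect data provider, and passing "TABLE" instead would list base tables:

```csharp
using System;
using System.Data;
using System.Data.Common;

class TablesDemo
{
    // Restrictions are positional: TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE.
    // Here only the table type is restricted, so views from all schemas
    // are returned.
    static void PrintViews(DbConnection conn)
    {
        DataTable views =
            conn.GetSchema("Tables", new string[] { null, null, "VIEW" });
        foreach (DataRow row in views.Rows)
        {
            Console.WriteLine("{0}.{1}",
                row["TABLE_SCHEMA"], row["TABLE_NAME"]);
        }
    }
}
```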


TablePrivileges Schema Collection


Description: The TablePrivileges schema collection identifies the privileges on tables that are defined in the catalog that are available to or granted by a given user.
Number of restrictions: 4
Restrictions available: TABLE_SCHEMA, TABLE_NAME, GRANTOR, GRANTEE
Sort order: TABLE_SCHEMA, TABLE_NAME, PRIVILEGE_TYPE

Table 5-17. TablePrivileges Schema Collection

GRANTEE (String): The user name (or PUBLIC) to whom the privilege has been granted.

GRANTOR (String): The user who granted the privileges on the table in TABLE_NAME.

IS_GRANTABLE (Boolean):
true: The privilege being described was granted with the WITH GRANT OPTION clause.
false: The privilege being described was not granted with the WITH GRANT OPTION clause.

PRIVILEGE_TYPE (String): The privilege type. This is one of the following:
SELECT
DELETE
INSERT
UPDATE
REFERENCES

TABLE_NAME (String): The table name.

TABLE_SCHEMA (String): The unqualified schema name in which the table is defined. This column exists only if the data provider supports schemas.

1. All classes are System.XXX. For example, System.String.


Views Schema Collection


Description: The Views collection identifies the views that are defined in the catalog and that are accessible to a given user.
Number of restrictions: 2
Restrictions available: TABLE_SCHEMA, TABLE_NAME
Sort order: TABLE_SCHEMA, TABLE_NAME

Table 5-18. Views Schema Collection

CHECK_OPTION (Boolean): A check option. This is one of the following:
true: Local update checking only.
false: Cascaded update checking (this has the same effect as not specifying a CHECK OPTION on the view definition).

DATE_CREATED (String): The date when the view was created, or DbNull if the data provider does not have this information.

DATE_MODIFIED (String): The date when the view definition was last modified, or DbNull if the data provider does not have this information.

DESCRIPTION (String): A description of the view.

IS_UPDATABLE (Boolean):
true: The view can be updated.
false: The view cannot be updated.

TABLE_NAME (String): The table name.

TABLE_SCHEMA (String): The unqualified schema name in which the table is defined. This column exists only if the data provider supports schemas.

VIEW_DEFINITION (String): The view definition. This is a query expression.

1. All classes are System.XXX. For example, System.String


Client Information for Connections


Many databases allow applications to store client information that is associated with a connection. For example, the following types of information can be useful for database administration and monitoring purposes:

- Name of the application currently using the connection.
- User ID for whom the application using the connection is performing work. The user ID may be different than the user ID that was used to establish the connection.
- Host name of the client on which the application using the connection is running.
- Product name and version of the driver on the client.
- Additional information that may be used for accounting or troubleshooting purposes, such as an accounting ID.

For DB2 V9.5 for Linux/UNIX/Windows and DB2 for z/OS, this information can feed directly into the Workload Manager (WLM) for workload management and monitoring purposes. See "DB2 Workload Manager (WLM) Attributes" on page 87 for more information about using the WLM.

How Databases Store Client Information


Typically, databases that support storing client information do so by providing a register, a variable, or a column in a system table in which the information is stored. If an application attempts to store information and the database does not provide a mechanism for storing that information, the data provider caches the information locally. Similarly, if an application returns client information and the database does not provide a mechanism for storing that information, the data provider returns the locally cached value.

For example, let's assume that the following code returns a pooled connection to a DB2 V9.1 for Linux/UNIX/Windows database and sets a client application name for that connection. In this example, the application sets the application name SALES157 using the data provider Application Name connection string option.

// Get Database Connection
DB2Connection conn = new DB2Connection("...;Application Name=SALES157;");
conn.Open();
//
// do something
//
conn.Close();

SALES157 is stored by the DB2 database in the CURRENT CLIENT_APPLNAME register, the location that DB2 reserves for this information. When the connection to the database is closed, the connection is returned to the connection pool as usual and the client information on the connection is reset to an empty string.

DataDirect Connect Series for ADO.NET Reference


Storing Client Information


Your application can store client information associated with a connection using the data provider connection string options listed in Table 6-1. Refer to the specific data provider chapters in the DataDirect Connect Series for ADO.NET User's Guide for a description of each connection string option. Table 6-1 shows the connection string options your application can use to store client information and where that client information is stored for each database.

Table 6-1. Database Locations for Storing Client Information

AccountingInfo (Additional information that may be used for accounting or troubleshooting purposes, such as an accounting ID)
  DB2: CURRENT CLIENT_ACCTNG register (DB2 for Linux/UNIX/Windows) or CLIENT ACCTNG register (DB2 for z/OS and DB2 for iSeries)
  Oracle: CLIENT_INFO value in the V$SESSION table
  Microsoft SQL Server: Local cache
  Sybase: Local cache

Application Name (Name of the application that is currently using the connection)
  DB2: CURRENT CLIENT_APPLNAME register (DB2 for Linux/UNIX/Windows) or CLIENT APPLNAME register (DB2 for z/OS and DB2 for iSeries). For DB2 V9.1 and higher for Linux/UNIX/Windows, this value is also stored in the APPL_NAME value in the SYSIBMADM.APPLICATIONS table.
  Oracle: CLIENT_IDENTIFIER attribute. In addition, this value is also stored in the PROGRAM value in the V$SESSION table.
  Microsoft SQL Server: program_name value in the sysprocesses table
  Sybase: clientapplname and program_name values in the sysprocesses table

Client Host Name (Host name of the client on which the application using the connection is running)
  DB2: CURRENT CLIENT_WRKSTNNAME register (DB2 for Linux/UNIX/Windows) or CLIENT WRKSTNNAME register (DB2 for z/OS and DB2 for iSeries)
  Oracle: MACHINE value in the V$SESSION table
  Microsoft SQL Server: hostname value in the sysprocesses table
  Sybase: clienthostname and hostname values in the sysprocesses table


Table 6-1. Database Locations for Storing Client Information (cont.)

Client User (User ID for whom the application using the connection is performing work)
  DB2: CURRENT CLIENT_USERID register (DB2 for Linux/UNIX/Windows) or CLIENT USERID register (DB2 for z/OS and DB2 for iSeries)
  Oracle: OSUSER value in the V$SESSION table
  Microsoft SQL Server: Local cache
  Sybase: clientname value in the sysprocesses table

Program ID (Product name and version of the driver on the client)
  DB2: CLIENT_PRDID value. For DB2 V9.1 and higher for Linux/UNIX/Windows, the CLIENT_PRDID value is located in the SYSIBMADM.APPLICATIONS table.
  Oracle: PROCESS value in the V$SESSION table
  Microsoft SQL Server: hostprocess value in the sysprocesses table
  Sybase: hostprocess value in the sysprocesses table

DB2 Workload Manager (WLM) Attributes


The Workload Manager (WLM) is a priority and resource manager within DB2 V9.5 for Linux/UNIX/Windows. On z/OS, the WLM is part of the operating system. When you set client information using the data providers, the WLM can access and use that information for workload management and monitoring purposes, as described in "Workload Manager (WLM)" in Chapter 5 of the DataDirect Connect Series for ADO.NET User's Guide.

DB2 V9.5 for Linux/UNIX/Windows


Table 6-2 lists the WLM attributes for DB2 V9.5 for Linux/UNIX/Windows that map to information that is set by data provider connection string options. Refer to your DB2 documentation for information about using these WLM attributes.

Table 6-2. WLM Attributes for DB2 V9.5 for Linux/UNIX/Windows

APPLNAME
  Connection String Option: Application Name
  Description: Name of the application that is currently using the connection

CURRENT CLIENT_APPLNAME
  Connection String Option: Program ID
  Description: Product name and version of the driver on the client


Table 6-2. WLM Attributes for DB2 V9.5 for Linux/UNIX/Windows (cont.)

CURRENT CLIENT_USERID
  Connection String Option: Client User
  Description: User ID for whom the application using the connection is performing work

CURRENT CLIENT_WRKSTNNAME
  Connection String Option: Client Host Name
  Description: Host name of the client on which the application using the connection is running

DB2 for z/OS


Table 6-3 lists the WLM attributes for DB2 for z/OS that map to information that is set by data provider connection string options. Refer to your DB2 documentation for information about using these WLM attributes.

Table 6-3. WLM Attributes for DB2 for z/OS

Correlation Info (CI)
  Driver Property: Program ID
  Description: Product name and version of the driver on the client

Collection Name (CN)
  Driver Property: Package Collection
  Description: Name of the collection or library (group of packages) to which DB2 packages are bound

Process Name (PC)
  Driver Property: Application Name
  Description: Name of the application that is currently using the connection

Userid (UI)
  Driver Property: Client User
  Description: User ID for whom the application using the connection is performing work


Designing .NET Applications for Performance Optimization


Developing performance-oriented .NET applications is not easy. The .NET standard includes only basic guidelines and interface definitions to help programmers develop .NET applications. The ADO.NET data providers do not automatically throw exceptions to say that your code is running too slowly. Designing a .NET application is a complex process, in part because the code can be very data provider-specific. If you are working with several databases, you will find that the programming concepts vary between the different data providers. You will need much more database knowledge to design your application effectively. This chapter presents guidelines compiled by examining the .NET implementations of shipping .NET applications. The guidelines discuss selecting .NET objects and methods, designing .NET applications, retrieving data, and updating data. These guidelines include:

- Retrieving only required data
- Selecting objects and methods that optimize performance
- Managing connections and updates

Following these general rules will help you solve some common .NET system performance problems, such as those listed in the following table:

Problem: Network communication is slow.
Solution: Reduce network traffic. See guidelines in "Retrieving Data" on page 97.

Problem: Evaluation of complex SQL queries on the database is slow and can reduce concurrency.
Solution: Simplify queries. See guidelines in "Simplifying Automatically-generated SQL Queries" on page 90.

Problem: Excessive calls from the application to the data provider slow performance.
Solution: Optimize application-to-data provider interaction. See guidelines in "Retrieving Data" on page 97.

Problem: Disk input/output is slow.
Solution: Limit disk input/output. See guidelines in "Using Connection Pooling" on page 91.


Simplifying Automatically-generated SQL Queries


The guidelines in this section help you to optimize system performance by editing or eliminating complex automatically-generated SQL queries.

Reviewing SQL Queries Created by Visual Studio Wizards


Tools such as the Add Table Wizard and Add View Wizard are convenient to use. You can quickly select the tables, columns, rows, and fields that you want to include in your query. However, this can result in a complex query, as shown in the following figure. Always review the generated SQL to make sure that the syntax is efficient and that the query contains only essential components.


Avoiding the CommandBuilder Object


It is tempting to use a CommandBuilder object because it generates SQL statements and can save the developer time when coding a new application that uses DataSets. However, this shortcut can have a negative effect on performance. Because of concurrency restrictions, the CommandBuilder can generate highly inefficient SQL statements. For example, suppose you have a table called emp, an 8-column table with simple employee records. A CommandBuilder would generate the following Update statement, which checks all values for concurrency restrictions:

CommandText: "UPDATE emp SET empno = ?, ename = ?, job = ?, mgr = ?,
hiredate = ?, sal = ?, comm = ?, dept = ? WHERE ( (empno = ?) AND
(ename = ?) AND (job = ?) AND ((mgr IS NULL AND ? IS NULL) OR
(mgr = ?)) AND (hiredate = ?) AND (sal = ?) AND ((comm IS NULL AND
? IS NULL) OR (comm = ?)) AND (dept = ?) )"

The end user can often write much more efficient Update and Delete statements than those that the CommandBuilder generates. For example, a programmer who knows the underlying database schema, and knows that the empno column of the emp table is the primary key for the table, can code the same Update statement as follows:

UPDATE emp SET empno = ?, ename = ?, job = ?, mgr = ?, hiredate = ?,
sal = ?, comm = ?, dept = ? WHERE empno = ?

This statement runs much more efficiently on the database server than the statement generated by the CommandBuilder, but loses the additional concurrency control.

Another drawback is implicit in the design of the CommandBuilder object. The CommandBuilder must generate statements at runtime. Each time a DataAdapter.Update method is called, the CommandBuilder must analyze the contents of the result set and generate Update, Insert, and Delete statements for the DataAdapter. When the programmer explicitly specifies the Update, Insert, and Delete statements for the DataAdapter, this extra processing time is avoided.
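
To avoid that runtime analysis, you can supply the DataAdapter's commands yourself. The following sketch uses the emp example above and the parameter-binding style shown later in this chapter; the parameter names and sizes are illustrative assumptions, not values from this guide:

```csharp
// Assumes an open OracleConnection named DBConn (hypothetical setup).
OracleDataAdapter myDataAdapter = new OracleDataAdapter(
    "SELECT empno, ename, job, mgr, hiredate, sal, comm, dept FROM emp",
    DBConn);

// Explicit Update statement: the Where clause checks only the
// primary key, instead of the all-columns concurrency check a
// CommandBuilder would generate.
myDataAdapter.UpdateCommand = new OracleCommand(
    "UPDATE emp SET ename = ?, job = ?, mgr = ?, hiredate = ?, " +
    "sal = ?, comm = ?, dept = ? WHERE empno = ?", DBConn);

// Bind each parameter to its source column in the DataSet.
myDataAdapter.UpdateCommand.Parameters.Add("p1", OracleDbType.VarChar, 10, "ename");
myDataAdapter.UpdateCommand.Parameters.Add("p2", OracleDbType.VarChar, 9, "job");
myDataAdapter.UpdateCommand.Parameters.Add("p3", OracleDbType.Number, 4, "mgr");
myDataAdapter.UpdateCommand.Parameters.Add("p4", OracleDbType.Date, 0, "hiredate");
myDataAdapter.UpdateCommand.Parameters.Add("p5", OracleDbType.Number, 7, "sal");
myDataAdapter.UpdateCommand.Parameters.Add("p6", OracleDbType.Number, 7, "comm");
myDataAdapter.UpdateCommand.Parameters.Add("p7", OracleDbType.Number, 2, "dept");
myDataAdapter.UpdateCommand.Parameters.Add("p8", OracleDbType.Number, 4, "empno");
```

Because no CommandBuilder is involved, no statement generation occurs when DataAdapter.Update is called.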

Designing .NET Applications


The guidelines in this section will help you to optimize system performance when designing .NET applications.

Using Connection Pooling


Connecting to a database is the single slowest operation inside a data-centric application. That's why connection management is important to application performance. Optimize your application by connecting once and using multiple statement objects, instead of performing multiple connections. Avoid connecting to a data source after establishing an initial connection. Connection pooling lets you reuse connections. Closing connections does not close the physical connection to the database. When an application requests a connection, an active connection is reused, thus avoiding the network I/O needed to create a new connection.


Connection pooling in ADO.NET is not provided by the core components of the .NET Framework. It must be implemented in the ADO.NET data provider itself.

Pre-allocate connections. Decide which connection strings you will need to meet your needs. Remember that each unique connection string creates a new connection pool. Once created, connection pools are not destroyed until the active process ends or the connection lifetime is exceeded. Maintenance of inactive or empty pools involves minimal system overhead.

Connection and statement handling should be addressed before implementation. Spending time thoughtfully on connection management improves application performance and maintainability.
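
The one-pool-per-connection-string behavior can be sketched as follows. The host, credentials, and pooling options shown are illustrative placeholders; check your data provider's chapter in the User's Guide for the exact pooling options it supports:

```csharp
// Both connections use an identical connection string, so the second
// Open() can reuse the physical connection that the first Close()
// returned to the pool (option names are illustrative).
string connStr = "Host=myserver;Port=50000;User ID=test;Password=secret;" +
                 "Pooling=true;Max Pool Size=20";

DB2Connection conn1 = new DB2Connection(connStr);
conn1.Open();
// ... do work ...
conn1.Close();      // returns the physical connection to the pool

DB2Connection conn2 = new DB2Connection(connStr);
conn2.Open();       // served from the pool; no new network connection
// ... do more work ...
conn2.Close();

// Any textual difference creates a second, separate pool:
DB2Connection conn3 = new DB2Connection(connStr + ";Connection Timeout=30");
```

Minimizing the number of distinct connection strings therefore minimizes the number of pools your process maintains.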

Opening and Closing Connections


Open connections just before they are needed. Opening them earlier than necessary decreases the number of connections available to other users and can increase the demand for resources. To keep resources available, explicitly Close the connection as soon as it is no longer needed. If you wait for the garbage collector to implicitly clean up connections that go out of scope, the connections will not be returned to the connection pool immediately, tying up resources that are not actually being used.

Close connections inside a finally block. Code in the finally block always runs, even if an exception occurs. This guarantees explicit closing of connections. For example:

try {
   DBConn.Open();
   // Do some other interesting work
}
catch (Exception ex) {
   // Handle exceptions
}
finally {
   // Close the connection
   if (DBConn != null)
      DBConn.Close();
}

If you are using connection pooling, opening and closing connections is not an expensive operation. Using the Close() method of the data provider's Connection object adds or returns the connection to the connection pool. Remember, however, that closing a connection automatically closes all DataReader objects that are associated with the connection.
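
In C#, a using block is an equivalent, more compact way to get the same guarantee, because Dispose() closes the connection even when an exception is thrown. This is a sketch; the connection string is a placeholder:

```csharp
// Dispose() runs when the block exits, normally or via an exception,
// which closes the connection and returns it to the pool.
using (DB2Connection DBConn = new DB2Connection("...your connection string..."))
{
    DBConn.Open();
    // Do some other interesting work
}   // DBConn is closed and returned to the pool here
```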


Implementing Reauthentication
Typically, you can configure a connection pool to provide scalability for connections. In addition, to help minimize the number of connections required in a connection pool, you can switch the user associated with a connection to another user, a process known as reauthentication. For example, suppose you are using Kerberos authentication to authenticate users with their operating system user name and password. To reduce the number of connections that must be created and managed, you can use reauthentication to have one connection service multiple users. Suppose your connection pool contains a connection, Conn, which was established using the user ALLUSERS. You can have that connection service multiple users (User A, B, C, and so on) by switching the user associated with the connection Conn to User A, B, C, and so on. For more information about the data providers' support for reauthentication, refer to the DataDirect Connect Series for ADO.NET User's Guide.

Managing Commits in Transactions


Committing transactions is slow because of the disk input/output and, potentially, the network input/output involved. Always start a transaction after connecting; otherwise, you are in autocommit mode.

What does a commit actually involve? The database server must flush back to disk every data page that contains updated or new data. This is usually a sequential write to a journal file, but nonetheless, it is a disk input/output. By default, autocommit is on when connecting to a data source. Autocommit mode usually impairs performance because of the significant amount of disk input/output that is needed to commit every operation.

Furthermore, some database servers do not provide an autocommit mode natively. For this type of server, the ADO.NET data provider must explicitly issue a Commit statement and a BeginTransaction for every operation sent to the server. In addition to the large amount of disk input/output that is required to support autocommit mode, a performance penalty is paid for up to three network requests for every statement that is issued by an application.

The following code fragment starts a transaction for Oracle:

OracleConnection MyConn = new OracleConnection("Connection String info");
MyConn.Open();

// Start a transaction
OracleTransaction TransId = MyConn.BeginTransaction();

// Enlist a command in the current transaction
OracleCommand OracleToDS = new OracleCommand();
OracleToDS.Transaction = TransId;
...
// Continue on and do more useful work in the
// transaction
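
Completing the fragment above, the transaction should be committed on success and rolled back on failure. This sketch reuses the same object names:

```csharp
try
{
    // ... execute OracleToDS and any other commands that were
    // enlisted in the transaction ...

    // One commit covers all of the work, instead of one
    // commit per statement under autocommit.
    TransId.Commit();
}
catch (Exception ex)
{
    // Undo everything done since BeginTransaction
    TransId.Rollback();
    throw;
}
```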


Although using transactions can help application performance, do not take this tip too far. Leaving transactions active can reduce throughput by holding locks on rows for long times, preventing other users from accessing the rows. Commit transactions in intervals that allow maximum concurrency.

Choosing the Right Transaction Model


Many systems support distributed transactions; that is, transactions that span multiple connections. Distributed transactions are at least four times slower than normal transactions due to the logging and network input/output needed to communicate between all the components involved in the distributed transaction (the ADO.NET data provider, the transaction monitor, and the database system). Use distributed transactions only when transactions must span multiple databases or multiple servers. Otherwise, avoid them and use local transactions whenever possible.

Using Commands Multiple Times


Choosing whether to use the Command.Prepare method can have a significant positive (or negative) effect on query execution performance. The Command.Prepare method tells the underlying data provider to optimize for multiple executions of statements that use parameter markers. Note that it is possible to Prepare any command regardless of which execution method is used (ExecuteReader, ExecuteNonQuery, or ExecuteScalar).

Consider the case where an ADO.NET data provider implements Command.Prepare by creating a stored procedure on the server that contains the prepared statement. Creating stored procedures involves substantial overhead, but the statement can be executed multiple times. Although creating stored procedures is performance-expensive, execution of that statement is minimized because the query is parsed and optimization paths are stored at create procedure time. Applications that execute the same statement multiple times can benefit greatly from calling Command.Prepare and then executing that command multiple times.

However, using Command.Prepare for a statement that is executed only once results in unnecessary overhead. Furthermore, applications that use Command.Prepare for large single-execution query batches exhibit poor performance. Similarly, applications that either always use Command.Prepare or never use Command.Prepare do not perform as well as those that use a logical combination of prepared and unprepared statements.
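
A prepare-once, execute-many pattern might look like the following sketch. The table, data values, and an already-open connection named MyConn are illustrative assumptions:

```csharp
// Assumes an open OracleConnection named MyConn (hypothetical).
OracleCommand cmd = new OracleCommand(
    "UPDATE emp SET sal = ? WHERE empno = ?", MyConn);
cmd.Parameters.Add("param1", OracleDbType.Number, 7, "");
cmd.Parameters.Add("param2", OracleDbType.Number, 4, "");

// Prepare once: the provider can parse and optimize the
// statement a single time for all subsequent executions.
cmd.Prepare();

int[]     empNos  = { 7369, 7499, 7521 };   // illustrative data
decimal[] newSals = { 900m, 1700m, 1350m };

// Execute many times, changing only the parameter values.
for (int i = 0; i < empNos.Length; i++)
{
    cmd.Parameters["param1"].Value = newSals[i];
    cmd.Parameters["param2"].Value = empNos[i];
    cmd.ExecuteNonQuery();
}
```

The same command would not be worth preparing if the loop ran only once.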

Using Statement Caching


A statement cache is a group of prepared statements or instances of Command objects that can be reused by an application. Using statement caching can improve application performance because the actions on the prepared statement are performed once even though the statement is reused multiple times over an application's lifetime. A statement cache is owned by a physical connection. After being executed, a prepared statement is placed in the statement cache and remains there until the connection is closed.


Caching all of the prepared statements that an application uses might appear to offer increased performance. However, this approach may come at a cost of database memory if you implement statement caching with connection pooling. In this case, each pooled connection has its own statement cache that may contain all of the prepared statements that are used by the application. All of these pooled prepared statements are also maintained in the database's memory. See Chapter 3, "Using Your Data Provider with the ADO.NET Entity Framework," on page 25 for application programming contexts that use the ADO.NET Entity Framework.


Using Parameter Markers as Arguments to Stored Procedures


When calling stored procedures, always use parameter markers for the arguments instead of using literal arguments. ADO.NET data providers can call stored procedures on the database server either by executing the procedure the same way as any other SQL query, or by optimizing the execution by invoking a Remote Procedure Call (RPC) directly into the database server.

When you execute the stored procedure as a SQL query, the database server parses the statement, validates the argument types, and converts the arguments into the correct data types. Remember that SQL is always sent to the database server as a character string, for example, "getCustName (12345)". In this case, even though the application programmer might assume that the only argument to getCustName is an integer, the argument is actually passed inside a character string to the server. The database server parses the SQL query, consults database metadata to determine the parameter contract of the procedure, isolates the single argument value 12345, then converts the string '12345' into an integer value before finally executing the procedure as a SQL language event.

Invoking an RPC inside the database server avoids the overhead of using a SQL character string. Instead, an ADO.NET data provider constructs a network packet that contains the parameters in their native data type formats, and executes the procedure remotely.

To use stored procedures correctly, set the CommandText property of the Command object to the name of the stored procedure. Then, set the CommandType property of the command to StoredProcedure. Finally, pass the arguments to the stored procedure using parameter objects. Do not physically code the literal arguments into the CommandText.

Example 1

SybaseCommand DBCmd = new SybaseCommand("getCustName", Conn);
SybaseDataReader myDataReader;
myDataReader = DBCmd.ExecuteReader();

In this example, the stored procedure cannot be optimized to use a server-side RPC. The database server must treat the SQL request as a normal language event, which includes parsing the statement, validating the argument types, and converting the arguments into the correct data types before executing the procedure.


Example 2

SybaseCommand DBCmd = new SybaseCommand("getCustName", Conn);
DBCmd.CommandType = CommandType.StoredProcedure;
DBCmd.Parameters.Add("param1", SybaseDbType.Int, 10, "").Value = 12345;
SybaseDataReader myDataReader;
myDataReader = DBCmd.ExecuteReader();

In this example, the stored procedure can be optimized to use a server-side RPC. Because the application avoids literal arguments and calls the procedure by specifying all arguments as parameters, the ADO.NET data provider can optimize the execution by invoking the stored procedure directly inside the database as an RPC. This example avoids SQL language processing on the database server, and the execution time is greatly improved.

Choosing Between a DataSet and a DataReader


A critical choice when designing your application is whether to use a DataSet or a DataReader. If you need to retrieve many records rapidly, use a DataReader. The DataReader object is fast, returning a fire hose of read-only data from the server, one record at a time. In addition, retrieving results with a DataReader requires significantly less memory than creating a DataSet. The DataReader does not allow random fetching, nor does it allow updating the data. However, ADO.NET data providers optimize their DataReaders for efficiently fetching large amounts of data.

In contrast, the DataSet object is a cache of disconnected data stored in memory on the client. In effect, it is a small database in itself. Because the DataSet contains all of the data that has been retrieved, you have more options in the way you can process the data. You can randomly choose records from within the DataSet and update, insert, and delete records at will. You can also manipulate relational data as XML. This flexibility provides some impressive functionality for any application, but comes with a high cost in memory consumption. In addition to keeping the entire result set in memory, the DataSet maintains both the original and the changed data, which leads to even higher memory usage. Do not use DataSets with very large result sets because the scalability of the application will be drastically reduced.
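
For the fast, forward-only case, the DataReader pattern looks like this sketch. The table, columns, and an already-open connection named Conn are illustrative assumptions:

```csharp
// Forward-only, read-only traversal; only the current row is
// materialized on the client, so no DataSet-style cache is built.
SybaseCommand cmd = new SybaseCommand(
    "SELECT empno, ename FROM emp", Conn);

using (SybaseDataReader reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        int empno = reader.GetInt32(0);
        string ename = reader.GetString(1);
        // process one row at a time
    }
}   // disposing the reader releases the server-side resources
```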

Using Native Managed Providers


Bridges into unmanaged code, that is, code outside the .NET environment, adversely affect performance. Calling unmanaged code from managed code causes the CLR (Common Language Runtime) to make additional checks on calls to the unmanaged code, which impacts performance.

The .NET CLR is a very efficient and highly tuned environment. By using 100% managed code, so that your .NET assemblies run inside the CLR, you can take advantage of the numerous built-in services to enhance the performance of your managed application and the productivity of your staff. The CLR provides automatic memory management, so developers don't have to spend time debugging memory leaks. Automatic lifetime control of objects includes garbage collection, scalability features, and support for side-by-side versions. In addition, .NET Framework security enforces security restrictions on managed code that protect the code and data from being misused or damaged by other code. An administrator can define a security policy to grant or revoke permissions on an enterprise, a machine, an assembly, or a user level.


However, many ADO.NET data provider architectures must bridge outside the CLR into native code to establish network communication with the database server. The overhead and processing required to cross this bridge is slow in the current version of the CLR. Depending on your architecture, you may not realize that the underlying ADO.NET data provider is incurring this security risk and performance penalty. Be careful when choosing an ADO.NET data provider that advertises itself as a 100% or pure managed code data provider. If the "Managed Data Provider" requires unmanaged database clients or other unmanaged pieces, then it is not a 100% managed data access solution. Only a very few vendors produce true managed code providers that implement their entire stack as a managed component.


Retrieving Data
To retrieve data efficiently, return only the data that you need, and choose the most efficient method of doing so. The guidelines in this section will help you to optimize system performance when retrieving data with .NET applications.

Retrieving Long Data


Unless it is necessary, applications should not request long data, because retrieving long data across a network is slow and resource-intensive. Remember that when you use a DataSet, all data is retrieved from the data source, even if you never use it. Although the best method is to exclude long data from the select list, some applications do not formulate the select list before sending the query to the ADO.NET data provider (that is, some applications send SELECT * FROM table name ...). If the select list contains long data, most ADO.NET data providers must retrieve that data at fetch time, even if the application does not bind the long data in the result set. When possible, try to implement a method that does not retrieve all columns of the table.

Most users don't want to see long data. If the user does want to see these result items, then the application can query the database again, specifying only the long columns in the select list. This method allows the average user to retrieve the result set without paying a high performance penalty for network traffic.

Consider a query such as "SELECT * FROM Employee WHERE ssid = '999-99-2222'". An application might only want to retrieve this employee's name and address. But remember that an ADO.NET data provider cannot tell which result columns an application might be trying to retrieve when the query is executed. A data provider only knows that an application can request any of the result columns. When the ADO.NET data provider processes the fetch request, it will most likely return at least one, if not more, result rows across the network from the database server. In this case, a result row will contain all the column values for each row, including an employee picture if the Employee table happens to contain such a column. Limiting the select list to contain only the name and address columns results in decreased network traffic and a faster-performing query at runtime.
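
The two-query approach described above might be sketched as follows. The Employee column names, the picture column, and the open connection Conn are hypothetical:

```csharp
// Query 1: fetch only the short columns most users need.
SybaseCommand shortCmd = new SybaseCommand(
    "SELECT name, address FROM Employee WHERE ssid = '999-99-2222'", Conn);
using (SybaseDataReader r = shortCmd.ExecuteReader())
{
    while (r.Read())
    {
        // display name and address; no long data crosses the network
    }
}

// Query 2: only if the user asks for it, fetch the long column alone.
SybaseCommand longCmd = new SybaseCommand(
    "SELECT picture FROM Employee WHERE ssid = '999-99-2222'", Conn);
using (SybaseDataReader r = longCmd.ExecuteReader())
{
    if (r.Read())
    {
        byte[] picture = (byte[])r["picture"];  // paid for only on demand
    }
}
```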


Reducing the Size of Data Retrieved


To reduce network traffic and improve performance, you can reduce the size of any data being retrieved to some manageable limit by using a database-specific command. For example, an Oracle data provider might let you limit the number of bytes of data the connection uses to fetch multiple rows. A Sybase data provider might let you limit the number of bytes of data that can be returned from a single IMAGE column in a result set. For example, with Sybase, you can issue "Set TEXTSIZE n" on any connection, where n sets the maximum number of bytes that will ever be returned to you from any TEXT or IMAGE column. If the data provider allows you to define the packet size, use the smallest packet size that meets your needs.

In addition, be careful to return only the rows you need. If you return five rows when you only need two rows, performance is decreased, especially if the unnecessary rows include long data. Especially when using a DataSet, be sure to use a Where clause with every Select statement to limit the amount of data that will be retrieved. Even when you use a Where clause, a Select statement that does not adequately restrict the request could return hundreds of rows of data. For example, if you want the complete row of data from the Employee table for each manager hired in recent years, you might be tempted to issue the following statement and then, in your application code, filter out the rows for employees who are not managers:

SELECT * FROM Employee WHERE hiredate > 2000

However, suppose the Employee table contains a photograph column. Retrieving all the extra rows could be extremely expensive. Let the database filter them for you, and avoid having all the extra data that you don't need sent across the network. A better request further limits the data returned and improves performance:

SELECT * FROM Employee WHERE hiredate > 2000 AND job_title='Manager'
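
Issuing such a database-specific command from ADO.NET is simply an ExecuteNonQuery call. This sketch applies the Sybase TEXTSIZE example above; the 64512-byte limit and the open connection Conn are illustrative:

```csharp
// Cap TEXT/IMAGE data returned on this connection
// (64512 bytes is an arbitrary illustrative limit).
SybaseCommand sizeCmd = new SybaseCommand("SET TEXTSIZE 64512", Conn);
sizeCmd.ExecuteNonQuery();   // no result set; applies to this connection

// Subsequent queries on Conn now truncate any TEXT or IMAGE
// column value to the configured maximum.
```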

Using Commands that Retrieve Little or No Data


Commands such as Update, Insert, and Delete do not return data. Use these commands with the ExecuteNonQuery method of the Command object. Although you can successfully execute these commands using the ExecuteReader method, the ADO.NET data provider will properly optimize the database access for Update, Insert, and Delete statements only through the ExecuteNonQuery method. The following example shows how to insert a row into the Employee table using ExecuteNonQuery:

DBConn.Open();
DBTxn = DBConn.BeginTransaction();
// Set the Connection property of the Command object
DBCmd.Connection = DBConn;
// Set the text of the Command to the INSERT statement
DBCmd.CommandText = "INSERT INTO Employee VALUES (15,'HAYES','ADMIN',6, " +
   "'17-APR-2002',18000,NULL,4)";
// Set the transaction property of the Command object


DBCmd.Transaction = DBTxn;
// Execute the statement with ExecuteNonQuery, because we are not
// returning results
DBCmd.ExecuteNonQuery();
// Now commit the transaction
DBTxn.Commit();
// Close the connection
DBConn.Close();

Use the ExecuteScalar method of the Command object to return a single value, such as a sum or a count, from the database. The ExecuteScalar method returns only the value of the first column of the first row of the result set. Once again, you could use the ExecuteReader method to successfully execute such queries, but by using the ExecuteScalar method, you tell the ADO.NET data provider to optimize for a result set that consists of a single row and a single column. By doing so, the data provider can avoid a lot of overhead and improve performance. The following example shows how to retrieve the count of a group:

// Retrieve the number of employees who make more than $50000
// from the Employee table

// Open connection to Sybase database
SybaseConnection Conn;
Conn = new SybaseConnection(
   "host=bowhead;port=4100;User ID=test01;Password=test01;" +
   "Database Name=Accounting");
Conn.Open();

// Make a command object
SybaseCommand salCmd = new SybaseCommand(
   "SELECT COUNT(sal) FROM Employee " +
   "WHERE sal>'50000'", Conn);

try
{
   int count = (int)salCmd.ExecuteScalar();
}
catch (Exception ex)
{
   // Display any exceptions in a message box
   MessageBox.Show(ex.Message);
}
// Close the connection
Conn.Close();


Choosing the Right Data Type


Advances in processor technology brought significant improvements to the way that operations such as floating-point math are handled. However, when the active portion of your application does not fit into on-chip cache, retrieving and sending certain data types is still expensive. When you are working with data on a large scale, it is important to select the data type that can be processed most efficiently. For example, integer data is processed faster than decimal data. Decimal data is defined according to internal database-specific formats. The data must be decoded and then converted, typically to a string. Note that all Oracle numeric types are actually decimals.
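To make the decode-and-convert step concrete, the following plain C# sketch (illustrative only; real wire formats are database- and provider-specific) contrasts integer data, which can be used as-is, with decimal data that must first be decoded from its transmitted form:

```csharp
using System;
using System.Globalization;

class DataTypeCost
{
    static void Main()
    {
        // Integer data arrives in a form the CPU can use directly.
        int units = 18000;

        // Decimal data is defined in a database-specific format and must be
        // decoded and converted -- typically by way of a string.
        string transmitted = "18000.50";   // simulated wire value
        decimal decoded = decimal.Parse(transmitted, CultureInfo.InvariantCulture);

        // The extra decode/convert step is the overhead the text describes.
        Console.WriteLine(decoded - units);
    }
}
```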


Processing time is longest for character strings, followed by integers, which usually require some conversion or byte ordering.

Updating Data
This section provides general guidelines to help you optimize system performance when updating data in databases.

Synchronizing Changes Back to the Data Source


The following example shows the application flow for updating a DataSet using Oracle's ROWID as the update mechanism:

// Create the DataAdapter and DataSet
OracleCommand DBCmd = new OracleCommand(
    "SELECT rowid, deptid, deptname FROM department", DBConn);
myDataAdapter = new OracleDataAdapter();
myDataAdapter.SelectCommand = DBCmd;
myDataAdapter.Fill(myDataSet, "Departments");

// Build the update rules
// Specify how to update data in the data set
myDataAdapter.UpdateCommand = new OracleCommand(
    "UPDATE department SET deptname = ?, deptid = ? " +
    "WHERE rowid = ?", DBConn);

// Bind parameters
myDataAdapter.UpdateCommand.Parameters.Add(
    "param1", OracleDbType.VarChar, 100, "deptname");
myDataAdapter.UpdateCommand.Parameters.Add(
    "param2", OracleDbType.Number, 4, "deptid");
myDataAdapter.UpdateCommand.Parameters.Add(
    "param3", OracleDbType.Number, 4, "rowid");

In this example, performance of the queries on the Oracle server improves because the WHERE clause includes only the rowid as a search condition.
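The adapter issues its UpdateCommand only for rows whose RowState is Modified after Fill. The following in-memory sketch (plain System.Data, no database required; the table and column names are illustrative) shows the change tracking that drives that decision:

```csharp
using System;
using System.Data;

class RowStateDemo
{
    static void Main()
    {
        var table = new DataTable("Departments");
        table.Columns.Add("deptid", typeof(int));
        table.Columns.Add("deptname", typeof(string));

        table.Rows.Add(10, "ACCOUNTING");
        table.AcceptChanges();                  // rows become Unchanged, as after Fill

        table.Rows[0]["deptname"] = "FINANCE";  // edit one row

        // DataAdapter.Update would run the UpdateCommand for this row only.
        Console.WriteLine(table.Rows[0].RowState);
        Console.WriteLine(table.GetChanges().Rows.Count);
    }
}
```

GetChanges returns only the edited rows, so an adapter synchronizing this table back to the data source touches exactly one row.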


Using ClickOnce Deployment


ClickOnce deployment lets you package Windows Forms or console applications so that they can be distributed with a minimum of work. Like browser-based application deployment, ClickOnce deployment lets clients download the assemblies they need from a Web page, a network file share, or from stored media, such as a CD-ROM. If the application is defined as self-updating, when the client accesses the application, the application checks the server to find out whether any assemblies have been updated. Any new assemblies are downloaded to the download cache on the client, refreshing the application without any interaction with the end user. NOTE: ClickOnce deployment is available only with Visual Studio 2005 and higher, and the .NET Framework Versions 2.0, 3.0, 3.5, and 3.5 SP1. Earlier versions of the .NET Framework used No-Touch Deployment. Because DataDirect Connect for ADO.NET data providers are built from 100% managed code, you can use ClickOnce deployment effectively to deploy the data provider with your Windows Forms application. Following deployment, .NET security requirements necessitate an initial configuration step on the client. Future updates and changes to the application are delivered to the client by the Web server. NOTES:

Deploying the Windows Forms application with a DataDirect Connect for ADO.NET data provider requires that each client has installed either the Microsoft .NET Framework or the Microsoft .NET Framework Redistributable 2.0 or higher, which is available for download on the Microsoft Web site.

Distributed transactions are not supported with data providers that are deployed with No-Touch Deployment.

Deploying the Data Provider with Your Application


To deploy the application on a Web server so that it can be executed through a Web browser on clients, you must embed the DataDirect Connect for ADO.NET license file into the application. To embed the license file using Visual Studio:

1. Right-click the application project in the Solution Explorer and select Add Existing Item.
2. Browse to the DataDirect license file (DDTek.lic), and click Add to add it to the project.
3. Right-click DDTek.lic and select Properties.


4. In the Build Action drop-down list, select Embedded Resource.

To specify a target CPU for the application, continue to Step 5. Otherwise, skip to Step 7.

5. Optionally, select Configuration Manager from the Solutions Manager drop-down list to specify a target CPU. The Configuration Manager dialog box appears.


6. In the Platform column, select the target CPU for your application. We recommend that you use Any CPU, the default setting, for optimal portability of your application. Click Close to close the Configuration Manager dialog box.
7. Build the application.
8. Deploy the application to the Web server.

Refer to the Microsoft documentation for detailed instructions on using ClickOnce deployment.




A Using an .edmx File


An .edmx file is an XML file that defines an Entity Data Model (EDM), describes the target database schema, and defines the mapping between the EDM and the database. An .edmx file also contains information that is used by the ADO.NET Entity Data Model Designer (Entity Designer) to render a model graphically. This appendix explains the changes to the .edmx file that are necessary to provide extended Entity Framework functionality to the EDM layer. The Entity Framework includes a set of methods similar to those of ADO.NET. These methods have been tailored to be useful for the new Entity Framework consumers: LINQ, Entity SQL, and Object Services. DataDirect Connect ADO.NET Entity Framework data providers model this functionality in the EDM by surfacing the DDTekConnectionStatistics and DDTekStatus entities. This allows you to quickly model this functionality using the standard tools in Visual Studio. The following code fragment is an example of the SSDL model:

<!-- SSDL content -->
<edmx:StorageModels>
  <Schema Namespace="DDTek.Store" Alias="Self" Provider="DDTek.Oracle"
      ProviderManifestToken="11g"
      xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator"
      xmlns="http://schemas.microsoft.com/ado/2006/04/edm/ssdl">
    <EntityContainer Name="DDTek_Connection">
      <EntitySet Name="Connection_Statistics" EntityType="DDTek.Store.Connection_Statistics" />
      <EntitySet Name="Status" EntityType="DDTek.Store.Status" />
    </EntityContainer>
    <Function Name="RetrieveStatistics" Aggregate="false" BuiltIn="false"
        NiladicFunction="false" IsComposable="false"
        ParameterTypeSemantics="AllowImplicitConversion"
        StoreFunctionName="DDTek_Connection_RetrieveStatistics" />
    <Function Name="EnableStatistics" Aggregate="false" BuiltIn="false"
        NiladicFunction="false" IsComposable="false"
        ParameterTypeSemantics="AllowImplicitConversion"
        StoreFunctionName="DDTek_Connection_EnableStatistics" />
    <Function Name="DisableStatistics" Aggregate="false" BuiltIn="false"
        NiladicFunction="false" IsComposable="false"
        ParameterTypeSemantics="AllowImplicitConversion"
        StoreFunctionName="DDTek_Connection_DisableStatistics" />
    <Function Name="ResetStatistics" Aggregate="false" BuiltIn="false"
        NiladicFunction="false" IsComposable="false"
        ParameterTypeSemantics="AllowImplicitConversion"
        StoreFunctionName="DDTek_Connection_ResetStatistics" />
    <!-- <Function Name="Reauthenticate" Aggregate="false" BuiltIn="false"
        NiladicFunction="false" IsComposable="false"
        ParameterTypeSemantics="AllowImplicitConversion"
        StoreFunctionName="DDTek_Connection_Reauthenticate">
      <Parameter Name="CurrentUser" Type="varchar2" Mode="In" />
      <Parameter Name="CurrentPassword" Type="varchar2" Mode="In" />
      <Parameter Name="CurrentUserAffinityTimeout" Type="number" Precision="10" Mode="In" />
    </Function> -->
    <EntityType Name="Connection_Statistics">

      <Key>
        <PropertyRef Name="Id" />
      </Key>
      <Property Name="SocketReadTime" Type="binary_double" Nullable="false" />
      <Property Name="MaxSocketReadTime" Type="binary_double" Nullable="false" />
      <Property Name="SocketReads" Type="number" Precision="20" Nullable="false" />
      <Property Name="BytesReceived" Type="number" Precision="20" Nullable="false" />
      <Property Name="MaxBytesPerSocketRead" Type="number" Precision="20" Nullable="false" />
      <Property Name="SocketWriteTime" Type="binary_double" Nullable="false" />
      <Property Name="MaxSocketWriteTime" Type="binary_double" Nullable="false" />
      <Property Name="SocketWrites" Type="number" Precision="20" Nullable="false" />
      <Property Name="BytesSent" Type="number" Precision="20" Nullable="false" />
      <Property Name="MaxBytesPerSocketWrite" Type="number" Precision="20" Nullable="false" />
      <Property Name="TimeToDisposeOfUnreadRows" Type="binary_double" Nullable="false" />
      <Property Name="SocketReadsToDisposeUnreadRows" Type="number" Precision="20" Nullable="false" />
      <Property Name="BytesRecvToDisposeUnreadRows" Type="number" Precision="20" Nullable="false" />
      <Property Name="IDUCount" Type="number" Precision="20" Nullable="false" />
      <Property Name="SelectCount" Type="number" Precision="20" Nullable="false" />
      <Property Name="StoredProcedureCount" Type="number" Precision="20" Nullable="false" />
      <Property Name="DDLCount" Type="number" Precision="20" Nullable="false" />
      <Property Name="PacketsReceived" Type="number" Precision="20" Nullable="false" />
      <Property Name="PacketsSent" Type="number" Precision="20" Nullable="false" />
      <Property Name="ServerRoundTrips" Type="number" Precision="20" Nullable="false" />
      <Property Name="SelectRowsRead" Type="number" Precision="20" Nullable="false" />
      <Property Name="StatementCacheHits" Type="number" Precision="20" Nullable="false" />
      <Property Name="StatementCacheMisses" Type="number" Precision="20" Nullable="false" />
      <Property Name="StatementCacheReplaces" Type="number" Precision="20" Nullable="false" />
      <Property Name="StatementCacheTopHit1" Type="number" Precision="20" Nullable="false" />
      <Property Name="StatementCacheTopHit2" Type="number" Precision="20" Nullable="false" />
      <Property Name="StatementCacheTopHit3" Type="number" Precision="20" Nullable="false" />
      <Property Name="PacketsReceivedPerSocketRead" Type="binary_double" Nullable="false" />
      <Property Name="BytesReceivedPerSocketRead" Type="binary_double" Nullable="false" />
      <Property Name="PacketsSentPerSocketWrite" Type="binary_double" Nullable="false" />
      <Property Name="BytesSentPerSocketWrite" Type="binary_double" Nullable="false" />
      <Property Name="PacketsSentPerRoundTrip" Type="binary_double" Nullable="false" />
      <Property Name="PacketsReceivedPerRoundTrip" Type="binary_double" Nullable="false" />
      <Property Name="BytesSentPerRoundTrip" Type="binary_double" Nullable="false" />
      <Property Name="BytesReceivedPerRoundTrip" Type="binary_double" Nullable="false" />
      <!-- Oracle specific -->
      <Property Name="PartialPacketShifts" Type="number" Precision="20" Nullable="false" />
      <Property Name="PartialPacketShiftBytes" Type="number" Precision="20" Nullable="false" />
      <Property Name="MaxReplyBytes" Type="number" Precision="20" Nullable="false" />
      <Property Name="MaxReplyPacketChainCount" Type="number" Precision="20" Nullable="false" />
      <Property Name="Id" Type="number" Precision="10" Nullable="false" />
    </EntityType>
    <EntityType Name="Status">
      <Key>
        <PropertyRef Name="Id" />
      </Key>
      <Property Name="ServerVersion" Type="varchar2" Nullable="false" />
      <Property Name="Host" Type="varchar2" Nullable="false" />
      <Property Name="Port" Type="number" Precision="10" Nullable="false" />
      <Property Name="SID" Type="varchar2" Nullable="false" />

      <!-- <Property Name="CurrentUser" Type="varchar2" Nullable="false" /> -->
      <!-- <Property Name="CurrentUserAffinityTimeout" Type="number" Precision="10" Nullable="false" /> -->
      <!-- <Property Name="SessionId" Type="number" Precision="10" Nullable="false" /> -->
      <Property Name="StatisticsEnabled" Type="number" Precision="1" Nullable="false" />
      <Property Name="Id" Type="number" Precision="10" Nullable="false" />
    </EntityType>
  </Schema>
</edmx:StorageModels>

Breaking the model down further, we establish a CSDL model at the conceptual layer; this is what is exposed to the EDM.

<edmx:ConceptualModels>
  <Schema Namespace="DDTek" Alias="Self" xmlns="http://schemas.microsoft.com/ado/2006/04/edm">
    <EntityContainer Name="DDTekConnectionContext">
      <EntitySet Name="DDTekConnectionStatistics" EntityType="DDTek.DDTekConnectionStatistics" />
      <EntitySet Name="DDTekStatus" EntityType="DDTek.DDTekStatus" />
      <FunctionImport Name="RetrieveStatistics" EntitySet="DDTekConnectionStatistics"
          ReturnType="Collection(DDTek.DDTekConnectionStatistics)" />
      <FunctionImport Name="EnableStatistics" EntitySet="DDTekStatus"
          ReturnType="Collection(DDTek.DDTekStatus)" />
      <FunctionImport Name="DisableStatistics" EntitySet="DDTekStatus"
          ReturnType="Collection(DDTek.DDTekStatus)" />
      <FunctionImport Name="ResetStatistics" EntitySet="DDTekStatus"
          ReturnType="Collection(DDTek.DDTekStatus)" />
      <FunctionImport Name="Reauthenticate" EntitySet="DDTekStatus"
          ReturnType="Collection(DDTek.DDTekStatus)">
        <Parameter Name="CurrentUser" Type="String" />
        <Parameter Name="CurrentPassword" Type="String" />
        <Parameter Name="CurrentUserAffinityTimeout" Type="Int32" />
      </FunctionImport>
    </EntityContainer>
    <EntityType Name="DDTekConnectionStatistics">
      <Key>
        <PropertyRef Name="Id" />
      </Key>
      <Property Name="SocketReadTime" Type="Double" Nullable="false" />
      <Property Name="MaxSocketReadTime" Type="Double" Nullable="false" />
      <Property Name="SocketReads" Type="Int64" Nullable="false" />
      <Property Name="BytesReceived" Type="Int64" Nullable="false" />
      <Property Name="MaxBytesPerSocketRead" Type="Int64" Nullable="false" />
      <Property Name="SocketWriteTime" Type="Double" Nullable="false" />
      <Property Name="MaxSocketWriteTime" Type="Double" Nullable="false" />
      <Property Name="SocketWrites" Type="Int64" Nullable="false" />
      <Property Name="BytesSent" Type="Int64" Nullable="false" />
      <Property Name="MaxBytesPerSocketWrite" Type="Int64" Nullable="false" />
      <Property Name="TimeToDisposeOfUnreadRows" Type="Double" Nullable="false" />
      <Property Name="SocketReadsToDisposeUnreadRows" Type="Int64" Nullable="false" />
      <Property Name="BytesRecvToDisposeUnreadRows" Type="Int64" Nullable="false" />
      <Property Name="IDUCount" Type="Int64" Nullable="false" />
      <Property Name="SelectCount" Type="Int64" Nullable="false" />
      <Property Name="StoredProcedureCount" Type="Int64" Nullable="false" />
      <Property Name="DDLCount" Type="Int64" Nullable="false" />
      <Property Name="PacketsReceived" Type="Int64" Nullable="false" />
      <Property Name="PacketsSent" Type="Int64" Nullable="false" />

      <Property Name="ServerRoundTrips" Type="Int64" Nullable="false" />
      <Property Name="SelectRowsRead" Type="Int64" Nullable="false" />
      <Property Name="StatementCacheHits" Type="Int64" Nullable="false" />
      <Property Name="StatementCacheMisses" Type="Int64" Nullable="false" />
      <Property Name="StatementCacheReplaces" Type="Int64" Nullable="false" />
      <Property Name="StatementCacheTopHit1" Type="Int64" Nullable="false" />
      <Property Name="StatementCacheTopHit2" Type="Int64" Nullable="false" />
      <Property Name="StatementCacheTopHit3" Type="Int64" Nullable="false" />
      <Property Name="PacketsReceivedPerSocketRead" Type="Double" Nullable="false" />
      <Property Name="BytesReceivedPerSocketRead" Type="Double" Nullable="false" />
      <Property Name="PacketsSentPerSocketWrite" Type="Double" Nullable="false" />
      <Property Name="BytesSentPerSocketWrite" Type="Double" Nullable="false" />
      <Property Name="PacketsSentPerRoundTrip" Type="Double" Nullable="false" />
      <Property Name="PacketsReceivedPerRoundTrip" Type="Double" Nullable="false" />
      <Property Name="BytesSentPerRoundTrip" Type="Double" Nullable="false" />
      <Property Name="BytesReceivedPerRoundTrip" Type="Double" Nullable="false" />
      <Property Name="PartialPacketShifts" Type="Int64" Nullable="false" />
      <Property Name="PartialPacketShiftBytes" Type="Int64" Nullable="false" />
      <Property Name="MaxReplyBytes" Type="Int64" Nullable="false" />
      <Property Name="MaxReplyPacketChainCount" Type="Int64" Nullable="false" />
      <Property Name="Id" Type="Int32" Nullable="false" />
    </EntityType>
    <EntityType Name="DDTekStatus">
      <Key>
        <PropertyRef Name="Id" />
      </Key>
      <Property Name="ServerVersion" Type="String" Nullable="false" />
      <Property Name="Host" Type="String" Nullable="false" />
      <Property Name="Port" Type="Int32" Nullable="false" />
      <Property Name="SID" Type="String" Nullable="false" />
      <Property Name="CurrentUser" Type="String" Nullable="false" />
      <Property Name="CurrentUserAffinityTimeout" Type="Int32" Nullable="false" />
      <Property Name="SessionId" Type="Int32" Nullable="false" />
      <Property Name="StatisticsEnabled" Type="Boolean" Nullable="false" />
      <Property Name="Id" Type="Int32" Nullable="false" />
    </EntityType>
  </Schema>
</edmx:ConceptualModels>

The following simple mapping binds the pieces together.

<!-- C-S mapping content -->
<edmx:Mappings>
  <Mapping Space="C-S" xmlns="urn:schemas-microsoft-com:windows:storage:mapping:CS">
    <EntityContainerMapping StorageEntityContainer="DDTek_Connection"
        CdmEntityContainer="DDTekConnectionContext">
      <EntitySetMapping Name="DDTekConnectionStatistics">
        <EntityTypeMapping TypeName="DDTek.DDTekConnectionStatistics">
          <MappingFragment StoreEntitySet="Connection_Statistics">
            <!-- StoreEntitySet="Connection_Statistics" TypeName="DDTek.DDTekConnectionStatistics" -->
            <ScalarProperty Name="SocketReadTime" ColumnName="SocketReadTime" />
            <ScalarProperty Name="MaxSocketReadTime" ColumnName="MaxSocketReadTime" />

            <ScalarProperty Name="SocketReads" ColumnName="SocketReads" />
            <ScalarProperty Name="BytesReceived" ColumnName="BytesReceived" />
            <ScalarProperty Name="MaxBytesPerSocketRead" ColumnName="MaxBytesPerSocketRead" />
            <ScalarProperty Name="SocketWriteTime" ColumnName="SocketWriteTime" />
            <ScalarProperty Name="MaxSocketWriteTime" ColumnName="MaxSocketWriteTime" />
            <ScalarProperty Name="SocketWrites" ColumnName="SocketWrites" />
            <ScalarProperty Name="BytesSent" ColumnName="BytesSent" />
            <ScalarProperty Name="MaxBytesPerSocketWrite" ColumnName="MaxBytesPerSocketWrite" />
            <ScalarProperty Name="TimeToDisposeOfUnreadRows" ColumnName="TimeToDisposeOfUnreadRows" />
            <ScalarProperty Name="SocketReadsToDisposeUnreadRows" ColumnName="SocketReadsToDisposeUnreadRows" />
            <ScalarProperty Name="BytesRecvToDisposeUnreadRows" ColumnName="BytesRecvToDisposeUnreadRows" />
            <ScalarProperty Name="IDUCount" ColumnName="IDUCount" />
            <ScalarProperty Name="SelectCount" ColumnName="SelectCount" />
            <ScalarProperty Name="StoredProcedureCount" ColumnName="StoredProcedureCount" />
            <ScalarProperty Name="DDLCount" ColumnName="DDLCount" />
            <ScalarProperty Name="PacketsReceived" ColumnName="PacketsReceived" />
            <ScalarProperty Name="PacketsSent" ColumnName="PacketsSent" />
            <ScalarProperty Name="ServerRoundTrips" ColumnName="ServerRoundTrips" />
            <ScalarProperty Name="SelectRowsRead" ColumnName="SelectRowsRead" />
            <ScalarProperty Name="StatementCacheHits" ColumnName="StatementCacheHits" />
            <ScalarProperty Name="StatementCacheMisses" ColumnName="StatementCacheMisses" />
            <ScalarProperty Name="StatementCacheReplaces" ColumnName="StatementCacheReplaces" />
            <ScalarProperty Name="StatementCacheTopHit1" ColumnName="StatementCacheTopHit1" />
            <ScalarProperty Name="StatementCacheTopHit2" ColumnName="StatementCacheTopHit2" />
            <ScalarProperty Name="StatementCacheTopHit3" ColumnName="StatementCacheTopHit3" />
            <ScalarProperty Name="PacketsReceivedPerSocketRead" ColumnName="PacketsReceivedPerSocketRead" />
            <ScalarProperty Name="BytesReceivedPerSocketRead" ColumnName="BytesReceivedPerSocketRead" />
            <ScalarProperty Name="PacketsSentPerSocketWrite" ColumnName="PacketsSentPerSocketWrite" />
            <ScalarProperty Name="BytesSentPerSocketWrite" ColumnName="BytesSentPerSocketWrite" />
            <ScalarProperty Name="PacketsSentPerRoundTrip" ColumnName="PacketsSentPerRoundTrip" />
            <ScalarProperty Name="PacketsReceivedPerRoundTrip" ColumnName="PacketsReceivedPerRoundTrip" />
            <ScalarProperty Name="BytesSentPerRoundTrip" ColumnName="BytesSentPerRoundTrip" />
            <ScalarProperty Name="BytesReceivedPerRoundTrip" ColumnName="BytesReceivedPerRoundTrip" />
            <ScalarProperty Name="PartialPacketShifts" ColumnName="PartialPacketShifts" />
            <ScalarProperty Name="PartialPacketShiftBytes" ColumnName="PartialPacketShiftBytes" />
            <ScalarProperty Name="MaxReplyBytes" ColumnName="MaxReplyBytes" />
            <ScalarProperty Name="MaxReplyPacketChainCount" ColumnName="MaxReplyPacketChainCount" />
            <ScalarProperty Name="Id" ColumnName="Id" />
          </MappingFragment>
        </EntityTypeMapping>
      </EntitySetMapping>
      <EntitySetMapping Name="DDTekStatus">
        <EntityTypeMapping TypeName="DDTek.DDTekStatus">
          <MappingFragment StoreEntitySet="Status">
            <ScalarProperty Name="ServerVersion" ColumnName="ServerVersion" />
            <ScalarProperty Name="Host" ColumnName="Host" />
            <ScalarProperty Name="Port" ColumnName="Port" />
            <ScalarProperty Name="SID" ColumnName="SID" />
            <!-- <ScalarProperty Name="CurrentUser" ColumnName="CurrentUser" /> -->
            <!-- <ScalarProperty Name="CurrentUserAffinityTimeout" ColumnName="CurrentUserAffinityTimeout" /> -->
            <!-- <ScalarProperty Name="SessionId" ColumnName="SessionId" /> -->
            <ScalarProperty Name="StatisticsEnabled" ColumnName="StatisticsEnabled" />
            <ScalarProperty Name="Id" ColumnName="Id" />

          </MappingFragment>
        </EntityTypeMapping>
      </EntitySetMapping>
      <FunctionImportMapping FunctionImportName="RetrieveStatistics"
          FunctionName="DDTek.Store.RetrieveStatistics" />
      <FunctionImportMapping FunctionImportName="EnableStatistics"
          FunctionName="DDTek.Store.EnableStatistics" />
      <FunctionImportMapping FunctionImportName="DisableStatistics"
          FunctionName="DDTek.Store.DisableStatistics" />
      <FunctionImportMapping FunctionImportName="ResetStatistics"
          FunctionName="DDTek.Store.ResetStatistics" />
      <FunctionImportMapping FunctionImportName="Reauthenticate"
          FunctionName="DDTek.Store.Reauthenticate" />
    </EntityContainerMapping>
  </Mapping>
</edmx:Mappings>


B Using Enterprise Library 4.1


Using the Microsoft Enterprise Libraries can simplify application development by wrapping common tasks, such as data access, into portable code that makes it easier to move your application from one DBMS to another. DataDirect Connect for ADO.NET data providers can be used with Data Access Application Blocks (DAAB). Applications that use the standard Logging Application Block and design patterns can quickly display the SQL generated by the DataDirect Connect for ADO.NET data providers that support the Microsoft ADO.NET Entity Framework. To use features of the Enterprise Library 4.1 with your data provider, download Microsoft Enterprise Library 4.1 (October 2008) from the Microsoft Web site. The Enterprise Library 4.1 installation by default includes the Enterprise Library documentation, which contains detailed information about using the application blocks.

Data Access Application Block Overview


The Data Access Application Block (DAAB) is designed to allow developers to replace ADO.NET boiler-plate code with standardized code for everyday database tasks. The overloaded methods in the Database class can:

Return scalar values.
Determine which parameters are needed and create them.
Involve commands in a transaction.

If your application needs to address specific DBMS functionality, you can use a DataDirect Connect for ADO.NET data provider.

Configuring the DAAB


Before you can configure the DAAB for use with your application, you must set up the environment:

1. First, make sure that you have installed Microsoft Enterprise Library 4.1 (October 2008).
2. Then, compile the project and note the output directory.
3. Open the DataDirect DAAB project for your DataDirect data provider, located in install_dir\Enterprise Libraries\Src\CS\.
4. Compile the project and note the output directory.

Configuring the Data Access Application Block consists of two procedures:


"Adding a New DAAB Entry" on page 112 "Adding the Data Access Application Block to Your Application" on page 113



Adding a New DAAB Entry


Now, use the Enterprise Library Configuration Tool to add a new DAAB entry:

1. Right-click Enterprise Library Configuration, and select New Application.
2. Right-click Application Configuration, then select New / Data Access Application Block. The Enterprise Library Configuration window appears.

3. In the Name field, enter a name for the DAAB's connection string, for example, MyOracle.
4. In the ConnectionString field, enter a connection string. For example:

   Host=ntsl2003;Port=1521;SID=ORCL1252;User ID=SCOTT;Password=TIGER;Encryption Method=SSL;Authentication Method=Kerberos;

5. Right-click the ProviderName field, and select the data provider. For example, select DDTek.Oracle.4.0 for the Oracle data provider.


6. Right-click Custom Provider Mappings and select New / Provider Mappings.


7. In the Name field, select the data provider name you specified in Step 5.
8. Select the TypeName field, and then choose the browse (...) button to navigate to the Debug output directory of the DataDirect DAAB that you built. Then, select the TypeName. For example, the Oracle TypeName is DDTek.EnterpriseLibrary.Data.Oracle.dll.

Leave the Enterprise Library Configuration window open for now and do not save this configuration until you complete the following section.

Adding the Data Access Application Block to Your Application


To add the DAAB to a new or existing application, perform these steps:

1. Add two additional References to your Visual Studio solution:

   Enterprise Library Shared Library
   Enterprise Library Data Access Application Block

2. Add the following directives to your C# source code:

   using Microsoft.Practices.EnterpriseLibrary.Data;
   using System.Data;

3. Rebuild the solution to ensure that the new dependencies are functional.
4. Determine the output Debug or Release path location of your current solution, and switch back to the Enterprise Library Configuration window (see "Adding a New DAAB Entry" on page 112).
5. Right-click the connection string under the Application Node and select Save Application.


6. Navigate to the Debug or Release output directory of your current solution, and locate the .exe file of the current solution.
7. Click the file name once, and add .config to the name, for example, MyOracle.config.
8. Ensure that Save as type 'All Files' is selected, and select Save.
9. Using File Explorer, copy the DDTek.EnterpriseLibrary.Data.XXX.dll from the DataDirect DAAB directories, where XXX indicates the data source.
10. Place the copy of this DLL into either the Debug or Release output directory of your current solution.

Using the Data Access Application Block in Application Code


Now that you have configured the DAAB, you can build applications on top of this DAAB. In the following example, we use the DAAB MyOracle and the DatabaseFactory to generate an instance of a Database object backed by an Oracle data source.

using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.Practices.EnterpriseLibrary.Data;
using System.Data;

namespace DAAB_Test_App_1
{
    class Program
    {
        static void Main(string[] args)
        {
            Database database = DatabaseFactory.CreateDatabase("MyOracle");
            DataSet ds = database.ExecuteDataSet(CommandType.TableDirect,
                "SQLCOMMANDTEST_NC_2003SERVER_1");
        }
    }
}

The Microsoft Enterprise Library DAAB coding patterns are now at your disposal.

DataDirect Connect Series for ADO.NET Reference

Logging Application Blocks

115

Logging Application Blocks


Using the Enterprise Library Logging Application Block (LAB) makes it easier to implement common logging functions. DataDirect Connect data providers that support the ADO.NET Entity Framework use the standard Logging Application Block and design patterns, and offer LAB customizations for additional functionality. To use features of the Enterprise Library with your data provider, download Microsoft Enterprise Library from http://www.codeplex.com/entlib. The Enterprise Library installation by default includes the Enterprise Library documentation, which contains detailed information about using the application blocks.

When Should You Use the LAB?


The DataDirect ADO.NET Entity Framework data providers include a set of LAB customizations that are useful for developing with the ADO.NET Entity Framework when you want to log the Command Trees and SQL generated when using the data provider.

Configuring the LAB


A logging capability can be added to an application by adding an entry to the application's configuration file (either app.config or web.config) using the Enterprise Library configuration tool. This tool contains specific instructions for enabling the Logging Application Block in the configuration file, as well as the AppSetting necessary to enable the LAB. To enable Logging Application Block output, set the environment property DDTek_Enable_Logging_Application_Block_Trace to true. Alternatively, in the app.config file, set the AppSetting property DDTek.EnableLoggingApplicationBlock to true. The following snippet shows the loggingConfiguration property of the app.config file:

<loggingConfiguration name="Logging Application Block" tracingEnabled="true"
    defaultCategory="General" logWarningsWhenNoCategoriesMatch="true">

Setting either of these properties to false disables the logging block. If enabled, the data provider establishes a new LogEntry instance for each SQL statement generated by the ADO.NET Entity Framework canonical query tree. The SQL logged to the Logging Block is the SQL that is ultimately transmitted to the data source.
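Assuming the AppSetting name given above, a minimal app.config sketch might look like the following (the surrounding elements are abbreviated, and only the DDTek.EnableLoggingApplicationBlock key comes from the text; the rest is illustrative structure):

```xml
<configuration>
  <appSettings>
    <!-- Enables the data provider's Logging Application Block output -->
    <add key="DDTek.EnableLoggingApplicationBlock" value="true" />
  </appSettings>
  <loggingConfiguration name="Logging Application Block" tracingEnabled="true"
      defaultCategory="General" logWarningsWhenNoCategoriesMatch="true">
    <!-- categories and trace listeners omitted -->
  </loggingConfiguration>
</configuration>
```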


To configure the Logging Application Block:

1. Select Start / Programs / Microsoft patterns and practices / Enterprise Library 4.1 October 2008 / Enterprise Library Configuration. The Enterprise Library Configuration window appears.

2. Select File / New Application.


3. Right-click the Application Configuration node and select New / Logging Application Block.


4. Right-click Category Sources, and select New / Category.
5. In the Name pane, select Name. Type the name of the new category, and then press ENTER. In the following example, the category name DDTek Error will be created.


6. From the SourceLevels drop-down list, set the logging level for the new category. By default, all logging levels are enabled.
7. Right-click the new category and select New / TraceListener Reference. A Formatted EventLog TraceListener node is added. From the ReferencedTraceListener drop-down list, select Formatted EventLog TraceListener.
8. Repeat Step 4 through Step 7 to create the following categories:

DDTek Information: Information not related to errors DDTek Command: Enables SQL, Parameter, and DbCommandTree logging

Select File / Save Application. The Save As window appears. Type a name for your configuration file. By default, the file is saved to C:\Program Files\Microsoft Enterprise Library October 2008\Bin\filename.exe.config, where filename is the name that you typed in the Save As window.

Adding a New Logging Application Block Entry


Now, use the Enterprise Library Configuration Tool to add a new Logging Application Block entry:

1 Select Start / Programs / Microsoft patterns and practices / Enterprise Library 4.1 October 2008 / Enterprise Library Configuration, and then select File / New Application.

2 Right-click Application Configuration, and then select New / Logging Application Block. The Configuration section appears in the right pane.

3 In the TracingEnabled field, type True.

4 Save the Logging Application Block.

Using the LAB in Application Code


The LAB that you configured must be added to the app.config or web.config file for your application. The following settings can be used to enable and configure the data provider's interaction with the LAB:

EnableLoggingApplicationBlock: Enables the Logging Application Block.

LABAssemblyName: Specifies the assembly name to which the Logging Application Block applies.

NOTE: If you are using any version of the LAB other than the Microsoft Enterprise Library 4.1 (October 2008) binary release (for example, an older or newer version, or a version that you have customized), you must specify a value for LABAssemblyName.

LABLoggerTypeName: Specifies the type name for the Logging Application Block logger.

LABLogEntryTypeName: Specifies the type name for the LogEntry object.
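As a hedged sketch, these settings might be supplied as AppSettings in app.config when pointing the data provider at a custom or non-default build of the LAB. The DDTek.EnableLoggingApplicationBlock key is documented above; the other key names and all values are hypothetical and should be verified against your data provider version:

```xml
<appSettings>
  <add key="DDTek.EnableLoggingApplicationBlock" value="true" />
  <!-- Required only when using a LAB other than the Enterprise Library 4.1
       (October 2008) binary release; key names and values are illustrative -->
  <add key="DDTek.LABAssemblyName"
       value="Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0" />
  <add key="DDTek.LABLoggerTypeName"
       value="Microsoft.Practices.EnterpriseLibrary.Logging.Logger" />
  <add key="DDTek.LABLogEntryTypeName"
       value="Microsoft.Practices.EnterpriseLibrary.Logging.LogEntry" />
</appSettings>
```

A fully qualified assembly name (including Culture and PublicKeyToken) may be required; check the strong name of the LAB assembly you are actually deploying.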


Glossary
.NET Framework
Microsoft defines Microsoft .NET as a set of Microsoft software technologies for connecting information, people, systems, and devices. To optimize software integration, the .NET Framework uses small, discrete, building-block applications called Web services that connect to each other as well as to other, larger applications over the Internet. The .NET Framework has two key parts:

- A class library that includes an environment for building Web applications (ASP.NET), an environment for building smart client applications (Windows Forms), and a loosely coupled data access subsystem (ADO.NET).
- The common language runtime (CLR), the core runtime engine for executing applications in the .NET Framework. You can think of the CLR as a safe area (a sandbox) inside of which your .NET code runs. Code that runs in the CLR is called managed code.

ADO.NET
The data access component for the .NET Framework. ADO.NET is made up of a set of classes that are used for connecting to a database; providing access to relational data, XML, and application data; and retrieving results.

ADO.NET Entity Framework
An object-relational mapping (ORM) framework for the .NET Framework. Developers can use it to create data access applications by programming against a conceptual application model instead of programming directly against a relational storage schema. This model allows developers to decrease the amount of code that must be written and maintained in data-centric applications.

assembly
A compiled representation of one or more classes. Each assembly is self-contained; that is, the assembly includes the metadata about the assembly as a whole. Assemblies can be private or shared:

- Private assemblies, which are used by a limited number of applications, are placed in the application folder or one of its subfolders. For example, even if a client has two different applications that call a private assembly named formulas, each client application loads the correct assembly.
- Shared assemblies, which are available to multiple client applications, are placed in the Global Assembly Cache (GAC). Each shared assembly is assigned a strong name to handle name and version conflicts.

assembly cache
A machine-wide code cache that is used for storing assemblies side by side. The cache has two parts: the global assembly cache, which contains assemblies that are explicitly installed to be shared among many applications on the computer, and the download cache, which stores code downloaded from Internet or intranet sites and is isolated to the application that downloaded it.

authentication
The process of identifying a user, typically based on a user ID and password. Authentication ensures that users are who they claim to be. See also client authentication, NTLM authentication, OS authentication, and user ID/password authentication.


bulk load
A method of inserting large amounts of data into a database table. Rows are sent from the database client to the database server in a continuous stream. The database server can optimize how rows are inserted. Also known as bulk copy.

client authentication
The process of identifying the user ID and password of the user logged onto the system on which the driver is running to authenticate the user to the database. The database server depends on the client to authenticate the user and does not provide additional authentication. See also authentication.

ClickOnce Deployment
A feature of the .NET Framework 2.0 that lets clients download the assemblies they need from a remote web server. The first time the assembly is referenced, it is downloaded to a cache on the client and executed. After that, when a client accesses the application, the application checks the server to find out whether any assemblies have been updated. Any new assemblies are downloaded to the download cache on the client, refreshing the application without any interaction with the end user.

client load balancing
A mechanism that distributes new connections in a computing environment so that no given server is overwhelmed with connection requests.

code access security (CAS)
A mechanism that is provided by the common language runtime through which managed code is granted permissions by a security policy; permissions are enforced, limiting the operations that the code will be allowed to perform.

Code First model
A development concept that is focused on defining your model using C#/Visual Basic .NET classes. These classes can then be mapped to an existing database or be used to generate a database schema. Additional configuration can be supplied using Data Annotations or via a fluent API.

collection
A set of similarly typed objects that are grouped together. For example, data collections include hash tables, queues, stacks, dictionaries, and lists. Some collections have specialized functions, such as the MetaDataCollection collections.

common language runtime (CLR)
The core runtime engine in the Microsoft .NET Framework. The CLR supplies services such as cross-language integration, code access security, object lifetime management, and debugging support. Applications that run in the CLR are sometimes said to be running "in the sandbox."

connection failover
A mechanism that allows an application to connect to an alternate, or backup, database server if the primary database server is unavailable, for example, because of a hardware failure or traffic overload.

connection retry
The number of times the data provider attempts to connect to the primary and, if configured, alternate database servers after the initial unsuccessful connection attempt. Connection retry can be an important strategy for system recovery.

connection pooling
The process by which connections can be reused rather than creating a new one every time the data provider needs to establish a connection to the underlying database.

Data Access Application Block (DAAB)
A pre-defined code block that provides access to the most often used ADO.NET data access features. Applications can use the application block to pass data through application layers, and submit changed data back to the database.

data provider
An ADO.NET data provider communicates with the application and database and performs tasks such as establishing a connection to a database, executing commands, and returning results to the application.

destination table
In a DataDirect Bulk Load operation, the table on the database server into which the data is copied.

Global Assembly Cache (GAC)
The part of the assembly cache that stores assemblies that are specifically installed to be shared by many applications on the computer. Applications deployed in the Global Assembly Cache must have a strong name to handle name and version conflicts.

isolation level
A particular locking strategy that is employed in the database system to improve data consistency. The higher the isolation level number, the more complex the locking strategy behind it. The isolation level provided by the database determines how a transaction handles data consistency. The American National Standards Institute (ANSI) defines four isolation levels:

- Read uncommitted (0)
- Read committed (1)
- Repeatable read (2)
- Serializable (3)
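The four ANSI levels above map onto the System.Data.IsolationLevel enumeration in ADO.NET, and a transaction can request one when it is started. A minimal illustrative sketch (the connection instance, table, and column names are placeholders, not part of this product's documented API):

```csharp
using System.Data;
using System.Data.Common;

class IsolationLevelSketch
{
    // Runs one update under the Serializable level (ANSI level 3), the most
    // restrictive strategy: it prevents dirty, non-repeatable, and phantom reads.
    static void Run(DbConnection conn)
    {
        conn.Open();
        using (DbTransaction txn = conn.BeginTransaction(IsolationLevel.Serializable))
        using (DbCommand cmd = conn.CreateCommand())
        {
            cmd.Transaction = txn;
            cmd.CommandText =
                "UPDATE accounts SET balance = balance - 100 WHERE id = 1";
            cmd.ExecuteNonQuery();
            txn.Commit();   // Commit promptly so locks are released
        }
    }
}
```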

Kerberos authentication
An OS authentication protocol that provides authentication using secret key cryptography. See also authentication and OS authentication.

load balancing
See client load balancing.

locking level
A database operation that restricts a user from accessing a table or record. Locking is used in situations when more than one user might try to use the same table at the same time. By locking the table or record, the system ensures that only one user at a time can affect the data.

Logging Application Block (LAB)
A component of the Microsoft Enterprise Libraries that simplifies the implementation of common logging functions. Developers can use the Logging Block to write information to a variety of locations, such as the event log, an e-mail message, or a database.

managed code
Code that is executed and managed by the .NET Framework, specifically by the CLR. Managed code must supply the information necessary for the CLR to provide services such as memory management and code access security.

metadata
Information about data, for example, a database schema that describes the fields, columns, and formats used in a database. Different database schema elements are exposed through schema collections.

Model First model
A development concept that is focused on the ability to start with a conceptual model and create the database from it. Additional configuration can be supplied using Data Annotations or through a fluent API.


namespace
A logical naming scheme for grouping related types. The .NET Framework uses a hierarchical naming scheme for grouping types into logical categories of related functionality, such as the ASP.NET technology or remoting functionality. Design tools can use namespaces to make it easier for developers to browse and reference types in their code. A single assembly can contain types whose hierarchical names have different namespace roots, and a logical namespace root can span multiple assemblies. In the .NET Framework, a namespace is a logical design-time naming convenience, whereas an assembly establishes the name scope for types at run time.

No-Touch Deployment
A feature of the .NET Framework 1.x that lets clients download the assemblies they need from a remote web server. The first time the assembly is referenced, it is downloaded to a cache on the client and executed. After that, when a client accesses the application, the application checks the server to find out whether any assemblies have been updated. Any new assemblies are downloaded to the download cache on the client, refreshing the application without any interaction with the end user.

NTLM authentication
A network authentication protocol that provides a challenge-response security mechanism for connections between Windows clients and servers, confirming the user's identification to a network service. It is used in later versions of Windows for backward compatibility. See also authentication and OS authentication.

OS authentication
An authentication process that can take advantage of the user name and password maintained by the operating system to authenticate users to the database, or use another set of user credentials specified by the application. By allowing the database to share the user name and password used for the operating system, users with a valid operating system account can log into the database without supplying a user name and password. See also authentication, Kerberos authentication, and NTLM authentication.

Performance Monitor
A tool in the Windows SDK that identifies areas in which performance problems exist.

Performance Tuning Wizard
A component that is built into DataDirect Connect for ADO.NET and accessible through Visual Studio that leads you through a series of questions about your application. Based on your answers, the Wizard provides the optimal settings for DataDirect Connect for ADO.NET connection string options that affect performance. Optionally, you can generate a new application that is pre-configured with a connection string that is optimized for your environment.

schema collection
Closely related schemas that can be handled more efficiently when grouped together. Database schema elements such as tables and columns are exposed through schema collections.

Secure Sockets Layer (SSL)
An industry-standard protocol for sending encrypted data over database connections. SSL secures the integrity of your data by encrypting information and providing SSL client/SSL server authentication.

stream
An abstraction of a sequence of binary or text data. The Stream class and its derived classes provide a generic view of these different types of input and output.

strong name
A name that consists of an assembly's text name, version number, and culture information (if provided), with a public key and a digital signature generated over the assembly. Assemblies with the same strong name must be identical.

unmanaged code
Code that is executed directly by the operating system, outside of the CLR. Unmanaged code includes all code written before the .NET Framework was introduced. Because it is outside the .NET environment, unmanaged code cannot make use of any .NET managed facilities such as memory management and code access security.

user ID/password authentication
An authentication process that authenticates the user to the database using a database user name and password. See also authentication.


Index
Symbols
.edmx file 105 .NET ClickOnce Deployment 101 designing applications for performance 89 getting schema information 61

C
CATALOGS schema collection 68 ClickOnce Deployment 101 client information about 85 how databases store 85 location used for storing 86 storing 86 code example pseudo stored procedures 39 using

A
adding DAAB to your application 52, 113 new DAAB entry 49, 112 new Logging Application block entry 118 ADO.NET Entity Framework Code First Model 25 configuring the data providers designing an Entity Data Model 41 overview 40

DAAB in application code 53, 114 Logging Application Block in your application 58, 119
Code First support 25 using 38 code portability, increasing 47 Columns schema collection 69 CommandBuilder class, impact on performance 91 commands retrieving little or no data 98 using multiple times 94 compiled help file 10 configuring Data Access Application Block (DAAB) 48, 111 Logging Application Block 54, 115 connecting improving performance 91 start transaction after 93 connection statistics, implementing in Entity Framework application 42 contacting Customer Support 11 controlling the size of the Entity Data Model 41 conventions, typographical 8 creating a model 26, 31 Customer Support, contacting 11

specifying Enterprise Library version 41 specifying version information 40


creating a model 26, 31 enhancing performance 42 implementing Kerberos authentication 43, 93 mapping EDM canonical functions 44 Model First support 25 obtaining connection statistics 42 optimal model size 41 using an .edmx file 105 overview 25

pseudo stored procedures to provide functionality 39


using Code First 38 attributes, DB2 Workload Manager (WLM) 87

B
books HTML version 9 PDF version 10 bridge, performance impact of using 96

D
Data Access Application Block (DAAB) adding a new DAAB entry 49, 112 adding to your application 52, 113 additional resources 60 configuring 48, 111 overview 47, 111 using in application code 53, 114 to increase code portability 47 when to use 47 data types, choosing to improve performance 99

Database First model 26 DataReader class, choosing when to use 96 DataSet choosing when to use 96 effect on performance 98 keeping result sets small 96 DataSourceInformation schema collection ColumnNames for Oracle data provider 65 ColumnNames supported 64 date, time, timestamp literal escape sequences 13 DB2 data provider outer join escape syntax 18 scalar functions supported 14 with ADO.NET Entity Framework 26 Workload Manager (WLM) 85 DB2 Workload Manager (WLM) attributes 87 deploying .NET applications ClickOnce Deployment 101 Windows Forms applications 101 with an ADO.NET data provider 101 designing .NET applications See performance optimization documentation, about 9

H
help file 10

I
IDBCommand 19 implementing Kerberos authentication in Entity Framework 43, 93 Indexes schema collection 73 interoperability using SQL extension escapes 19 using the GenericDatabase class option for DAAB implementation 48 iSeries and AS/400, scalar functions supported 14 isolation levels data consistency behavior compared 23 dirty reads 22 non-repeatable reads 22 phantom reads 22 data currency 23 description read committed 22 read uncommitted 22 repeatable read 22 serializable 22

E
Enterprise Library, using with the data providers version 4.1 41, 111 version 5.0 47 Entity Data Model (EDM) 105 Entity Framework See ADO.NET Entity Framework escape sequences, outer join 18 escapes date and time 13 RowSetSize property used in 19 SQL extension 19 stored procedure 18 example .edmx file 105 reauthentication 43, 93 using the LAB in application code 58, 119 ExecuteScalar and ExecuteNonQuery, performance implications 98

J
joins left outer 19 nested outer 19 outer join escape sequence 18 right outer 19

K
Kerberos authentication implementing in Entity Framework application 43, 93 using with reauthentication 43, 93

F
fetching random data 96 functions supported 14

L
left outer joins 19 license file 101 limiting the size of the result set, SQL extension escape 19 literals, escape sequence 13 location used for storing client information for a connection 86 locking modes and levels 21 overview 21

G
GenericDatabase class option for DAAB implementation 48 glossary 121

Logging Application Block (LAB) adding a new LAB entry 118 configuring 54, 115 using in application code 58, 119 when to use 54, 115 long data, performance impact of retrieving 97 managing connections 91 simplifyng automatically-generated SQL queries 90 size of data retrieved 98 turning off autocommit 93 using disconnected DataSet 96 native managed providers 96 POCO entities 25 prepared statements caching 94 performance 94 PrimaryKeys schema collection 76 ProcedureParameters schema collection 77 Procedures schema collection 79 pseudo stored procedures, using to provide functionality 39


M
managed code, performance advantages 96 MetaDataCollections schema collection 63 Microsoft Enterprise Library Application Blocks 47 Model First support 25, 44 using 31

N
native managed providers, performance advantages 96 nested outer joins 19 numeric functions Oracle data provider 16 SQL Server data provider 17 Sybase data provider 17

R
random data, fetching 96 reauthentication example 43, 93 specifying support 64 ReservedWords schema collection 67 result set, impact of size on scalability 96 retrieving long data 97 right outer joins 19 RowsetSize property 19

O
obtaining connection statistics in Entity Framework 42 online books, installing 9 Oracle data provider outer join escape syntax 18 provider-specific ColumnNames supported 65 scalar functions supported 16 with ADO.NET Entity Framework 26 Oracle Entity Framework data provider using Code First 38 using Model First 31 outer join escape sequence 18

S
scalar functions DB2 data provider 14 Oracle data provider 16 overview 14 SQL Server data provider 17 Sybase data provider 17 schema collection CATALOGS 68 Columns 69 DataSourceInformation 64 Indexes 73 MetaDataCollections 63 PrimaryKeys 76 ProcedureParameters 77 Procedures 79 ReservedWords 67 Schemata 80 TablePrivileges 82 TablesSchemas 81 Views 83 Schemata schema collection 80 simplifyng automatically-generated SQL queries 90

P
page-level locking 21 parameter markers in stored procedures 95 performance optimization avoiding distributed transactions 94 use of CommandBuilder objects 91 choosing

between a DataReader and a DataSet 96 data types 99


Entity Framework 42 general 89

SQL escape sequences date, time, timestamp 13 extension 19 general 13 outer join 18 RowSetSize property 19 scalar functions 14 support for 13 SQL leveling 47 SQL queries generated by Visual Studio wizards 90 SQL Server data provider outer join escape syntax 18 scalar functions supported 17 statement caching 94 stored procedures escapes 18 performance implications 94 using parameter markers as arguments 95 storing client information 86 string functions DB2 data provider 14 Oracle data provider 16 SQL Server data provider 17 Sybase data provider 17 Sybase data provider outer join escape syntax 18 scalar functions supported 17

W
Workload Manager (WLM) 85

X
Xml Describe Type connection string option 61, 62 XML, manipulating relational data as 96

T
TablePrivileges schema collection 82 TablesSchemas schema collection 81 time literal escape sequence 13 Timedate functions Oracle data provider 16 SQL Server data provider 17 Sybase data provider 17 timestamp literal escape sequence 13 transactions managing commits 93 performance considerations of using distributed 94 typographical conventions 8

U
unmanaged code, performance impact 96 using Command.Prepare 94 DAAB in application code 53, 114 LAB in application code 58, 119 schema metadata 63

V
Views schema collection 83
