

Database Access with Visual Basic .NET, Third Edition
By Jeffrey P. McManus, Jackie Goldstein

Publisher: Addison Wesley
Pub Date: February 14, 2003
ISBN: 0-672-32343-5
Pages: 464

Database Access with Visual Basic .NET continues to use techniques developed by Jeffrey McManus that
provide solutions to problems faced by developers every day. Since data access is the most used feature in
corporate development, it is important for developers to understand the most effective and efficient way to
access data using .NET technologies. This book provides clear explanations of how to use ADO.NET to access
data stored in relational databases, as well as how XML integrates with ADO.NET. The authors use their
years of experience to relate key topics to real-world applications through use of Business Cases that include
code listings in Visual Basic .NET.


Table of Contents


Copyright
Preface
Who This Book Is For
How This Book Is Organized
The Software Environment
Keeping in Touch
About the Authors
About the Contributor
About the Technical Reviewers
Acknowledgments
Chapter 1. Database Basics
What Is a Database?
Business Cases
Tables and Fields
Manipulating Data with Objects
Data Types
Creating a Database Schema
Relationships
Normalization
Creating a User Interface in a Windows Forms Application

Summary
Chapter 2. Structured Query Language Queries and Commands
What Is a Query?
Testing Queries with the Server Explorer
Retrieving Records with the SELECT Clause
Designating a Record Source with the FROM Clause
Specifying Criteria with the WHERE Clause
Sorting Results with ORDER BY
Displaying the Top or Bottom of a Range with TOP
Joining Related Tables in a Query
Performing Calculations in Queries
Aliasing Field Names with AS
Queries That Group and Summarize Data
Union Queries
Subqueries
Manipulating Data with SQL
Using Data Definition Language
Summary
Chapter 3. Getting Started with SQL Server 2000
Setting Up and Running Microsoft SQL Server 2000
Getting Started with SQL Server 2000: The Basics
Summary
Chapter 4. ADO.NET: Data Providers
Overview of ADO.NET
Overview of .NET Data Provider Objects
The Connection Object
The Command Object
The DataReader Object
Using the Connection and Command Design-Time Components
Other Data Provider Objects
Summary
Chapter 5. ADO.NET: The DataSet
Applications and Components of the DataSet
Populating and Manipulating the DataSet
Using the DataSet Component
Summary
Chapter 6. ADO.NET: The DataAdapter
Populating a DataSet from a Data Source
Updating the Data Source
Summary
Chapter 7. ADO.NET: Additional Features and Techniques
Detecting Concurrency Conflicts
Table and Column Mappings
DataViews
Strongly Typed DataSets
Summary
Chapter 8. Visual Studio.NET Database Projects
Creating a Database Project
Database References
Scripts
Queries
Summary
Chapter 9. XML and .NET
An Overview of XML
XML Classes in .NET
Extending SQL Server with SQLXML 3.0 and IIS
Using XML, XSLT, and SQLXML to Create a Report
Summary
Chapter 10. ADO.NET and XML
Basic Reading and Writing of XML
Creating an XmlReader from a Command Object
The XmlDataDocument Object
Summary
Chapter 11. WebForms: Database Applications with ASP.NET
An Overview of ASP.NET
Accessing a Database Through ASP.NET
Improving the Performance of ASP.NET Database Applications Through Stored Procedures
Summary
Chapter 12. Web Services and Middle-Tier Technologies
Using the Middle Tier to Provide Presentation Logic
Using Data in the Middle Tier
Exposing Objects Through Web Services
Putting It All Together
Summary


Copyright
Many of the designations used by manufacturers and sellers to distinguish their products are claimed as
trademarks. Where those designations appear in this book, and Addison-Wesley was aware of a trademark
claim, the designations have been printed with initial capital letters or in all capitals.
The authors and publisher have taken care in the preparation of this book, but make no expressed or implied
warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for
incidental or consequential damages in connection with or arising out of the use of the information or
programs contained herein.
The publisher offers discounts on this book when ordered in quantity for bulk purchases and special sales.
For more information, please contact:
U.S. Corporate and Government Sales
(800) 382-3419
corpsales@pearsontechgroup.com
For sales outside of the U.S., please contact:
International Sales
(317) 581-3793
international@pearsontechgroup.com
Visit Addison-Wesley on the Web: www.awprofessional.com
Library of Congress Cataloging-in-Publication Data
McManus, Jeffrey P.
Database access with Visual Basic .Net / Jeffrey P. McManus and Jackie Goldstein ;
Kevin T. Price, contributor. 3rd ed.
p. cm.
ISBN 0-672-32343-5 (alk. paper)
1. Microsoft Visual BASIC. 2. BASIC (Computer program language) 3. Microsoft .NET. I. Goldstein, Jackie. II.
Price, Kevin T. III. Title.
QA76.73.B3M3988 2003
005.2'768 dc21
2002043755

Copyright © 2003 by Pearson Education, Inc.


All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or
transmitted, in any form, or by any means, electronic, mechanical, photocopying, recording, or otherwise,
without the prior consent of the publisher. Printed in the United States of America. Published simultaneously
in Canada.
For information on obtaining permission for use of material from this work, please submit a written request
to:
Pearson Education, Inc.
Rights and Contracts Department
75 Arlington Street, Suite 300
Boston, MA 02116
Fax: (617) 848-7047
Text printed on recycled paper
1 2 3 4 5 6 7 8 9 10   MA   07 06 05 04 03
First printing, February, 2003

Dedication

This book is dedicated to my parents,
Ann and Mike Goldstein,
who always encouraged and supported me
throughout my education
and professional career.

Jackie Goldstein, October 2002

For my wife.

Kevin Price, October 2002


Preface
The purpose of this book is to show you how to develop database applications, using Visual Basic.NET and
ADO.NET.
Although both the OLEDB and the ODBC Data Providers for .NET are discussed (Chapter 4), almost all the
demonstrations and examples in this book use the data provider for Microsoft SQL Server. It is readily
available and used by most of our readers. Moreover, applying the concepts and making the required code
modifications for other data sources are normally very straightforward. We point out where there are
significant differences.
Even though we expect that most readers will be working with SQL Server, we do not assume that they are
experienced with it. You may be new to database applications or only have experience with other databases
such as Microsoft Access or Oracle. We have therefore included a hefty overview of Microsoft SQL Server in
Chapter 3. If you are experienced with SQL Server, you may want to skip this chapter. However, it covers a
wide variety of topics, and you may still find a "nugget" or two that would make reading the chapter
worthwhile.
Along with the coverage of SQL Server as a data source, we have included coverage of XML and XML
integration with ADO.NET. This aspect of developing database applications with Visual Basic.NET is crucial
and is often overlooked or short-changed. However, because XML is so important to developing modern
data-driven applications, we have provided rather extensive coverage of this topic.
This book contains a lot of demonstrations, examples, and code. We believe that their use is the best way to
help you understand the concepts being presented. We normally provide relatively simple examples to
demonstrate the concepts and then present Business Cases to put the concepts into a real-world context.


Who This Book Is For


We assume that you are already familiar with Visual Basic.NET (VB.NET). Although we give step-by-step
coding instructions and code examples, we do not cover VB.NET syntax in this book. We assume that you are
reasonably comfortable with VB.NET and Visual Studio.NET (VS.NET) and do not waste your time reviewing
fundamental concepts. For example, we do not normally step through basic tasks, such as opening and
saving project files, except maybe the first time you encounter them. We do, however, often show how to do
the same thing in several different ways; we hope that this will extend your knowledge of VB.NET without
rehashing the basics. You may also notice varying coding styles, and even screen shots from different versions
of Windows, all illustrating the flexibility of VB.NET.
Most of the examples in this book are presented with Windows Application (Windows Forms) as the project
type. The reason is that nearly all Visual Basic programmers are most familiar and comfortable with this type
of application. This approach allows us to focus on database access, rather than on the issues involved in the
different types of .NET projects. Still, in later chapters, we do discuss and show ASP.NET Web Applications
and Web Services, providing database access examples for these technologies and project types.


How This Book Is Organized


This book can be thought of as containing three parts. The first part, consisting of Chapters 1-3, comprises
the preliminaries. The coverage of database basics, SQL, and SQL Server is meant to provide the
fundamentals required for the novice to proceed comfortably throughout the rest of the book. At the same
time, these chapters provide a good review of these topics even for someone who is experienced in these
areas.
The second part can be thought of as the core of the book. Chapters 4-7 provide in-depth explanations and
numerous examples of the major ADO.NET objects and the use of their properties and methods. Chapter 7
goes beyond the basics to explore advanced features and techniques of the ADO.NET objects.
The third part of the book shows how the ADO.NET technologies and techniques previously presented can be
used to build real-world applications. This part includes the use of Visual Studio Database Projects for
managing SQL scripts in Chapter 8, a discussion of XML in Chapter 9, and the integration of XML and
ADO.NET in Chapter 10. Finally, we present additional types of applications that utilize ADO.NET: Chapter 11
covers ASP.NET Web Applications and Chapter 12 covers Web Services and middle-tier objects.


The Software Environment


We assume that you have already installed, or are capable of installing, Visual Studio.NET. The only thing to
note regarding its use is that there are significant differences between available capabilities and behaviors of
the visual database tools, depending on the edition of Visual Studio and the type of database that you use.
Some of these differences are as follows.
Visual Studio Edition: Available Features

Standard:
View tables and data, and execute stored procedures for SQL Server Desktop Engine
and Access (MDB) databases.

Professional:
View tables and data, and execute stored procedures for any database that has an
OLEDB provider or ODBC driver.
Design (create/modify) tables and views for SQL Server Desktop Engine databases.

Enterprise Developer or Architect:
View tables and data, and execute stored procedures for any database that has an
OLEDB provider or ODBC driver.
Design (create/modify) tables, views, stored procedures, triggers, and functions for
SQL Server Desktop Engine, SQL Server, and Oracle databases.

In parts of Chapters 1, 2 and 8 we use some features found only in the Enterprise Developer or Enterprise
Architect versions of Visual Studio.
In Chapter 3 we provide step-by-step instructions for installing SQL Server 2000, in case you're not familiar
with the process. We recommend that you back up or make a copy of the pubs sample database installed
with SQL Server because many of the code examples use this database and some of them modify the data
that it contains.
All the Business Cases and many of the other code samples use the Novelty database, which was designed
specifically for this book. Both Chapters 3 and 8 show the development of SQL scripts to create this
database. To use many of the code samples in the book, you must first create and populate the Novelty
database on SQL Server 2000. The steps provided here are based on the assumption that the user (you, in
most cases) logging in has the rights necessary to create a database on the server. Keep in mind that some
people may refer to a database and actually mean the application that handles the data; that is not the case
anywhere in this book. When we use the word database, we explicitly mean the container of organized,
relational data kept in SQL Server 2000.
Included in the download samples for this book, located at
http://www.awprofessional.com/titles/0672323435, is the file NoveltyDB.sql, which is used to create the
database. Complete the following steps to create the database on SQL Server 2000.

1. Open SQL Server Query Analyzer and log in to the desired SQL Server.
2. Open the NoveltyDB.sql file by first selecting the File menu and then Open and browsing to the location
of the file on your computer.
3. Once open, the SQL code is displayed in a window for you to view.
4. Click on the Execute Query item on the toolbar. It is a green arrow to the right of a checkmark icon.
5. The script will execute and create the database. You can populate the database by repeating steps 2-4
and replacing the filename NoveltyDB.sql with any of the SQL files with the word "Data" in the name.

The OrdersData.sql file is an example of the included files that will insert data into the database.
Finally, the original 1.0 release of the .NET Framework and Visual Studio.NET did not include the .NET Data
Provider for ODBC. It is included in later releases, and you can download it separately from the Microsoft
Web site (http://www.microsoft.com) if you need to do so. While there, you can also download the Microsoft
.NET Data Provider for Oracle, if you use an Oracle database, although we don't specifically discuss that
provider in this book.


Keeping in Touch
The projects, demonstrations, examples, and code used in this book, along with any future changes or
additions, can be found at http://www.awprofessional.com/titles/0672323435. E-mails from readers are
welcome; contact Jackie Goldstein at Jackie@Renaissance.co.il or webmaster@awprofessional.com.


About the Authors


Jeffrey P. McManus is a developer and speaker specializing in Microsoft tools. As a developer, he has
specialized in online application development involving Internet and client-server technologies. He is the
author of four books on database and component technologies, including the best-selling previous edition of
this book, and two books on .NET technologies. Jeffrey has been a regular speaker at VBITS/VSLive,
European DevWeek, and VBConnections Conferences.
Jackie Goldstein is the principal of Renaissance Computer Systems, a company specializing in development
and consulting with Microsoft tools and technologies. He has almost 20 years' experience developing and
managing software applications in the United States and Israel, and is experienced at helping companies
evaluate and integrate new technologies. Jackie is the MSDN Regional Director for Israel, the founder of the
Israel VB User Group, and a featured speaker at international developer events including VSLive, TechEd,
VBITS, Microsoft Developer Days, and SQL2TheMax. He has also been chosen to work with Microsoft as a
Subject Matter Expert to help review, enhance, and finalize the technical content and presentations for
worldwide Microsoft Developer Days events. Jackie has a Bachelor's degree in Electrical Engineering, and
separate Master's degrees in Electrical Engineering, Computer Science, and Management.


About the Contributor


Kevin T. Price is a senior technologist in Vienna, Virginia, specializing in security and scalability, who has
worked with all aspects of application development within the Microsoft toolset for several years. He has also
written chapters for or edited several books related to XML, security, and .NET technologies. When not
sitting at a computer, Kevin can frequently be found either in a kitchen or on a paintball field. He can be
reached via e-mail at kpcrash@patriot.net.


About the Technical Reviewers


Anjani Chittajallu obtained a Master's degree from the Indian Institute of Technology (I.I.T.-Madras) with a
major in Control Systems Engineering. She specializes in designing and developing enterprise systems with
Microsoft Technologies. Anjani currently holds an MCSD certification. She can be reached at
srianjani@hotmail.com.
Andrew J. Indovina is currently a senior software developer in Rochester, New York. With a degree in
Computer Science, he has a wide programming background, including assembly, C/C++, Visual Basic, Java,
XML, and ASP. In addition, he has cowritten two books on Visual Basic and C++ and has served as technical
editor on numerous computer books. His latest projects include developing applications with Microsoft .NET.


Acknowledgments
As this book comes to life, we would like to thank several people who helped make it happen:
Sondra Scott, our Acquisitions Editor, who got the whole thing started and met many challenges to keep it
going.
Laurie McGuire, our patient and helpful Developmental Editor.
Kevin Price, who agreed to step up and fill in chapters, under difficult circumstances.
Anjani Chittajallu and Andrew Indovina, our Technical Reviewers, who not only kept us honest, but also
provided valuable insights and ideas.
Michael Pizzo, from Microsoft, who always responded immediately to questions, with either answers or
referrals to the right people with the answers.
Our wives, children, families, and friends, who have supported us throughout and make it all
worthwhile.


Chapter 1. Database Basics


IN THIS CHAPTER

What Is a Database?
Business Cases
Tables and Fields
Manipulating Data with Objects
Data Types
Creating a Database Schema
Relationships
Normalization
Creating a User Interface in a Windows Forms Application
A database lies at the core of many business software applications. Databases are prevalent in the world of
business because they permit centralized access to information in a way that's consistent, efficient, and
relatively easy to set up and maintain. In this chapter we cover the basics involved in setting up and
maintaining a database for a business, including what a database is, why databases are useful, and how you
can use databases to create business solutions.
If you've used Visual Basic before or done any database programming, you might find this chapter to be
rather old hat. However, we do bring you up to speed on some jargon that can vary from one database
system to another.
Although database concepts tend to be the same from one database system to another, things tend to have
their own names from one vendor implementation to the next. What's called one thing in one vendor's
system is often called something completely different in another. For example, many client-server
programmers refer to queries stored in the database as views; however, Microsoft Access programmers refer
to them as queries or QueryDefs. The two are basically the same.
If you're upgrading to Visual Basic.NET (VB.NET) from a previous version of Visual Basic, you need to be
aware of several new aspects of database programming with VB.NET. It takes a fundamentally different
approach to data access than any version of Visual Basic you've ever worked with. This approach is largely
based on Internet standards, with an eye toward enabling your applications to access data online remotely.
Visual Studio.NET (VS.NET) includes a rich set of visual, intuitive tools that facilitate rapid, consistent
development of databases and allow you to be more interactive in that process. Previously, creating and
maintaining a database relied heavily on knowledge of many different tools. With .NET, you often can take
advantage of wizards that work without adding extraneous code or limiting the flexibility you need.
If you're already familiar with database development in Visual Basic 6.0, you may want to jump ahead to
Chapter 4 for information on new methods of accessing data in VB.NET.


What Is a Database?
A database is a repository of information. Although there are several different types of databases, in this
book we are concerned primarily with relational databases, currently the most commonly used type of
database. A relational database:

Stores data in tables, which in turn comprise rows, also known as records, and columns, also known as
fields.
Enables you to retrieve, or query, subsets of data from tables.
Enables you to connect, or join, tables for the purpose of retrieving related records stored in different
tables.
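These three capabilities can be illustrated in a few lines. The sketch below is not from the book (whose examples use VB.NET against SQL Server); Python's built-in sqlite3 engine and made-up sample data stand in to show records, fields, and a query that retrieves a subset of rows.

```python
# Illustrative sketch only -- the book's examples use VB.NET and SQL Server;
# Python's built-in sqlite3 engine and invented sample data stand in here.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A table stores records (rows) made up of fields (columns).
cur.execute("CREATE TABLE tblCustomer (ID INTEGER PRIMARY KEY, LastName TEXT, State TEXT)")
cur.executemany("INSERT INTO tblCustomer (LastName, State) VALUES (?, ?)",
                [("Jones", "CA"), ("Smith", "NY"), ("Wong", "CA")])

# A query retrieves a subset of the records in the table.
ca_customers = cur.execute(
    "SELECT LastName FROM tblCustomer WHERE State = 'CA' ORDER BY LastName").fetchall()
print(ca_customers)  # [('Jones',), ('Wong',)]
```

Joining related tables works the same way: a JOIN clause names the field the tables share, as the chapter's business case demonstrates.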

What Is a Database Platform?


The basic functions of a database are provided by a database platform, a software system that manages how
data is stored and retrieved. When using VB.NET, you have a number of database platforms available to you.
The primary database platform that we cover in this book is Microsoft SQL Server 2000. (For an introduction
to this database platform, see Chapter 3.) In contrast, a database engine is the actual workhorse of the
database platform. It is the component of a database platform actually responsible for executing functions
and for data management.


Business Cases
Many computer books contain long laundry lists of software features, with hastily scribbled explanations of
how they work. If you're lucky, the discussion of software relates the software to the real world in some way.
In contrast, in this book we present the software in terms of business solutions. Accordingly, many of the
chapters contain at least one business case, in which a fictional company pursues the elusive goal of office
automation in dealing with common real-world business problems. Most of the business cases in this book
follow the exploits of Jones Novelties, Incorporated, a small business just breaking into the retail souvenir,
novelty, and party-tricks business.

Business Case 1.1: Introducing Jones Novelties, Incorporated


The company's CEO, Brad Jones, recognizes that, for Jones Novelties, Incorporated, to succeed, it must
automate many of its transactions. These include customer contacts, inventory, and billing systems, and
implementation must be tailored to the business and flexible enough to change over time.
Jones recognizes that the company will rise or fall on the basis of its access to information, so he decides to
use a relational database system to manage the company's information. The design and functionality of such
a database is the focus of the rest of this chapter.


Tables and Fields


Databases consist of tables, which represent broad categories of data. If you were creating a database to
handle the accounts for a business, for example, you might create one table for customers, another for
invoices, and another for employees. Tables have predefined structures containing data that fits into them.
Tables contain records, which are individual pieces of data within a broad category. For example, a table of
customers contains information pertinent to the people that make up the client base of a business. Records
can contain almost any type of data and are retrieved, edited, and deleted through the use of stored
procedures and/or queries written in Structured Query Language (SQL).
Records comprise fields. A field represents a subdivision of data in a record. A record that represents an
entry in an address book might consist of fields for a customer's first and last name, address, city, state, zip
code, and telephone number.
You can use VB.NET code to refer to and manipulate databases, tables, records, and fields. One of the new
features of database programming with VB.NET that deserves attention is how strongly it enforces the
correct data type. For example, methods such as GetString() and GetInt32() are now available; they help
reduce coding by returning each value already converted to the specified type as it is read
from the database.
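As a rough analogue of those typed accessors (an analogy only; sqlite3 is not ADO.NET), Python's sqlite3 module likewise hands each column back already converted to its declared type:

```python
# Analogy only: sqlite3 is not ADO.NET, but like the DataReader's typed
# accessors it returns each column in its declared type.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tblCustomer (ID INTEGER, FirstName TEXT)")
cur.execute("INSERT INTO tblCustomer VALUES (1, 'Brad')")

row = cur.execute("SELECT ID, FirstName FROM tblCustomer").fetchone()
# Roughly what reader.GetInt32(0) and reader.GetString(1) give you in ADO.NET:
assert isinstance(row[0], int) and isinstance(row[1], str)
print(row)  # (1, 'Brad')
```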

Designing Your Database


To create a database, you must first determine the information that it is to keep track of. You then create
table definitions comprising fields that define the types of data you'll store. After you create this structure,
the database can then store data in the form of records.
You can't add data to a database that has no table or field definitions because the database has nowhere to
store the data. So the design of the database is crucial, particularly because changing the design of a
database can be difficult once you've implemented it.
In this book we present tables in a standard format, with the table's name at the top and the list of field
names beneath, as follows.
tblMyTable
ID
FirstName
LastName
...

The vertical ellipsis (dots) after the last field indicates that this table has one or more additional fields that
we omitted for the sake of brevity.
If you're new to the world of database programming but have used other computer applications, you might
be surprised that a database application makes you go through a few additional steps before you can start
entering data. A word processing application, for example, enables you just to open the application and

type; the details of how the file is saved are hidden in the application itself. The main reason for designing
databases ahead of time is efficiency. If a computer application knows exactly how much and what kinds of
data to store, it can store and retrieve those data optimally. As you'll learn after you create your first
100,000-record multiuser database, speed is of paramount importance in the database environment.
Anything you can do to speed the process of adding information to and retrieving it from the database is
worthwhile.
A guiding principle in database table design is to put fields related to the same category of data in the same
table. Thus all customer records go in a Customer table, the orders that those customers place go in an
Orders table, and so on.
Just because different sets of data go into different tables doesn't keep you from using them together; quite
to the contrary. When the data you need is spread across two or more tables in a relational database, you
can access that data by using a relationship. Later in this chapter we discuss relationships; for now, we focus
on table design.

Business Case 1.2: Designing Tables and Relationships


Brad Jones has determined that Jones Novelties, Incorporated, requires a way to store information on
customers. He's reasonably sure that most orders will be repeat business, so he wants to be able to send
customers catalogs twice a year.
Jones scribbles a basic database schema on a cocktail napkin. "Here's what the business needs to keep track
of," he says:

The customer's name, address, city, state, zip code, and phone number
The customer's region of the country (Northwest, Southwest, Midwest, Northeast, South, or Southeast)
The date of the customer's last purchase
Jones figures that all this information should go into a single table, to keep the database simple. His
database developer tells him that might be possible but that he would end up with an inefficient,
disorganized, and extremely inflexible database.
The information that Jones wants to include doesn't all map directly to database fields. For example, because
a region is a function of a person's state of residence, it doesn't make sense to have a State field and a
Region field in the same table. Doing so would mean that a data-entry person would have to enter similar
information on a customer twice. Instead, it would make much more sense for the database to store a State
field in the Customer table and store information pertaining to regions in a separate Region table. If the
Region table always knows which states map to which regions, the data-entry person doesn't have to enter a
region for each customer. Instead, he can just enter the name of the state, and the Customer table can work
with the Region table to determine the customer's region.
Similarly, splitting the Name field into FirstName and LastName fields will make it easier to sort on those
fields once data has been entered into them. This aspect of the design might seem trivial, but surprisingly,
many database developers don't take it into consideration. Recovering from this kind of design flaw in a
production database is awfully hard.
So Jones and his associate determine that data on the company's customers should be stored in a table
called tblCustomer that contains the following fields.

tblCustomer
ID
FirstName
LastName
Company
Address
City
State
PostalCode
Phone
Fax
E-mail
Data pertaining to the various regions of the country is to be stored in a separate table called tblRegion. This
table contains the following fields.
tblRegion
ID
State
RegionName
The two tables are related by the State field, which exists in both tables. The relationship between the
Region table and the Customer table is a one-to-many relationship; for each record in tblRegion there can be
none, one, or many matching records in tblCustomer. (In the sections on relationships later in this chapter
we discuss in detail how to take advantage of such a relationship for retrieving records.)
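To make the relationship concrete, here is a hedged sketch, with Python's sqlite3 standing in for SQL Server and the state-to-region rows invented for illustration, of deriving a customer's region from the State field the two tables share:

```python
# Hedged sketch: sqlite3 in place of SQL Server; table and field names follow
# the chapter, but the sample rows are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tblCustomer (ID INTEGER PRIMARY KEY, FirstName TEXT, State TEXT)")
cur.execute("CREATE TABLE tblRegion (ID INTEGER PRIMARY KEY, State TEXT, RegionName TEXT)")
cur.executemany("INSERT INTO tblRegion (State, RegionName) VALUES (?, ?)",
                [("CA", "Southwest"), ("NY", "Northeast")])  # sample mappings
cur.execute("INSERT INTO tblCustomer (FirstName, State) VALUES ('Brad', 'CA')")

# The data-entry person stores only the state; the region is derived by
# joining on the State field that exists in both tables.
region = cur.execute(
    "SELECT r.RegionName FROM tblCustomer c "
    "JOIN tblRegion r ON c.State = r.State WHERE c.FirstName = 'Brad'").fetchone()
print(region)  # ('Southwest',)
```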
Note how the database developer named the tables and fields in her preliminary table designs. First, she
named each table with the prefix tbl. Doing so enables her to distinguish, at a glance, a table from another
type of database object that can also store records. Next, note that each field name consists of full words
(instead of abbreviations) and doesn't contain spaces or other special characters such as underscores.
Although SQL Server enables you to name database objects with spaces, underscores, and other
nonalphanumeric characters, it's a good idea to avoid their use. Using them makes it difficult to remember
the exact spelling of a field name later. (You won't have to remember whether the field is named FirstName
or FIRST_NAME, for example.) Although this guideline seems like a trivial distinction now, when you start
writing code against a database consisting of 50 tables and 300 fields, you'll appreciate having named things
simply and consistently from the beginning.
One last thing missing from Jones's wish list is the answer to the question, When did this customer last
purchase something from us? The database developer decides that this information can be determined from
date values in the table that stores data pertaining to customers' orders. This table has the following
structure.

tblOrder
ID
CustomerID
OrderDate
Amount
In this table, the ID field uniquely identifies each order. The CustomerID field connects an order with a
customer. To attach an order to a customer, the customer's ID is copied into the Order table's CustomerID
field. That way, looking up all the orders for a particular customer is easy (as we demonstrate later).
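A quick sketch of that lookup, again in sqlite3 rather than the book's VB.NET and with the order rows invented for illustration, including the "last purchase" question from Jones's wish list:

```python
# Hedged sketch using sqlite3; the tblOrder structure follows the chapter,
# the sample data does not come from the book.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tblCustomer (ID INTEGER PRIMARY KEY, LastName TEXT)")
cur.execute("CREATE TABLE tblOrder (ID INTEGER PRIMARY KEY, CustomerID INTEGER, "
            "OrderDate TEXT, Amount REAL)")
cur.execute("INSERT INTO tblCustomer (ID, LastName) VALUES (1, 'Smith')")
cur.executemany("INSERT INTO tblOrder (CustomerID, OrderDate, Amount) VALUES (?, ?, ?)",
                [(1, "2003-01-15", 19.95), (1, "2003-02-02", 42.50)])

# All orders for customer 1, found via the CustomerID field that links the tables.
orders = cur.execute("SELECT OrderDate, Amount FROM tblOrder "
                     "WHERE CustomerID = 1 ORDER BY OrderDate").fetchall()
# The most recent OrderDate answers "When did this customer last purchase?"
last_purchase = cur.execute(
    "SELECT MAX(OrderDate) FROM tblOrder WHERE CustomerID = 1").fetchone()[0]
print(last_purchase)  # 2003-02-02
```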


Manipulating Data with Objects


Once you have created tables, you'll need a way to manipulate them. That involves entering data into tables
and retrieving data from them, as well as inspecting and modifying the structure of tables. To manipulate the
structure of a table, use data-definition commands (covered in Chapter 2). To manipulate data, use one of
two objects provided by the .NET framework: DataSet or DataReader.
A DataSet object typically represents a subset of records retrieved from the database. It is conceptually
similar to a table (or, in some cases, a group of related tables) but includes some important distinctive
properties of its own. DataSets can easily be represented as XML data and are well suited for using remote
data (as when you pass a result set from a server to a client, for example, or when you transfer data
between two back-end systems). With VB.NET, datasets aren't limited to holding data retrieved from a
database. For example, a dataset can be used to manage data contained in an XML document or
configuration file, or dynamically created from user-defined data in advanced situations.
As with ADO, VB.NET and ADO.NET provide a way to have a dataset that is either connected or
disconnected. With a disconnected dataset, information is placed in the dataset, and that dataset is
returned to the application that asked for it. At that point, the connection to the database is closed, and the
database knows nothing about what happens to the data until the application tells it to do something. For
example, suppose that you present a user with a form that contains data that she can change. When she
clicks on the button to update the data, the application has to reconnect to the database and execute code
that will change the data. Using a connected dataset, by contrast, lets you "lock in" the data being used and update the
information in the database almost instantaneously. We discuss this technique in detail in Chapter 5.
A DataReader object is similar to a DataSet object, but it has different performance characteristics and
capabilities. One difference is that, as its name signifies, the DataReader reads data; it is therefore a read-only means of accessing data. The DataReader also lacks a straightforward way to represent data as XML.
To keep things simple (as well as to reflect real-world best practices), in this book we generally use the
DataReader object when performing basic data access operations, reserving DataSet for situations where
we really need it, such as constructing data-driven Web Services (discussed in Chapter 12).

DataSets are represented as objects, just as the typical ADODB.Recordset is an object that you might
have previously worked with in Visual Basic. And like other types of Visual Basic objects, DataSet objects
have their own properties and methods. We return to a discussion of objects that manipulate data later in
this chapter and throughout this book. For now you need only to understand that the .NET framework uses
objects to provide a clean, consistent, and relatively simple way for you to take advantage of sophisticated
database functionality in your applications.


Data Types
One of the steps in designing a database is to declare the type of each field. This declaration enables the
database engine to save and retrieve data efficiently. SQL Server provides 21 different types of data. Table
1.1 lists the data types available to you in a database application.

Table 1.1. Data Types Available in SQL Server

Data Type         Description

bigint            An eight-byte integer (whole number) in the range
                  -9,223,372,036,854,775,808 through 9,223,372,036,854,775,807.
binary            Used to store fixed-length binary data of up to 8,000 bytes.
bit               A true or false value stored as 0 or 1.
char              A fixed-length character field of up to 8,000 characters.
datetime          A value that can store a date and time between January 1, 1753,
                  and December 31, 9999.
decimal           Fixed-precision decimal numbers. You can define the scale (number
                  of digits to the right of the decimal point) when you create the
                  field. Data in a decimal field takes 5 to 17 bytes of storage.
float             An approximate floating-point number with a mantissa of up to 53
                  bits. It requires either four or eight bytes of storage,
                  depending on the precision you specify.
image             Variable-length binary data of up to 2,147,483,647 bytes.
int               A four-byte whole number from -2,147,483,648 to 2,147,483,647.
money             A numeric field that has special properties to store monetary
                  values accurately.
nchar             A fixed-length character field containing up to 4,000
                  international (Unicode) characters.
ntext             A variable-length character field containing up to 1,073,741,823
                  international characters.
nvarchar          A variable-length character field containing up to 4,000
                  international characters.
real              An approximate floating-point number equivalent to float(24). It
                  requires four bytes of storage and holds about seven significant
                  digits.
smalldatetime     A value that can store a date and time from January 1, 1900, to
                  June 6, 2079.
smallint          A two-byte whole number between -32,768 and 32,767.
text              A variable-length field containing up to 2,147,483,647 characters.
                  (This kind of field is known in some database systems, such as
                  Microsoft Access, as a Memo field.)
tinyint           A single-byte whole number between 0 and 255.
uniqueidentifier  A 128-bit (16-byte) number, also called a globally unique
                  identifier (GUID). You can use this number to identify a record
                  uniquely; it is typically used in replication.
varbinary         Variable-length binary data of up to 8,000 bytes.
varchar           A variable-length field containing up to 8,000 characters.

There isn't a one-to-one correspondence between VB.NET's data types and database field data types,
although the correspondence is closer in VB.NET than it was in VB6. For example, a SQL Server int data
type corresponds to a .NET integer data type; both are 32-bit integers. However, you can't directly set a
database field to a user-defined type or a Visual Basic-style Object variable.
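To see how a few of these types look in practice, here is a hedged T-SQL sketch; the table and column names are invented for illustration and aren't part of the Novelty design:

```sql
-- Illustrative only: a table exercising several SQL Server data types
CREATE TABLE tblTypeDemo
(
    ID       int IDENTITY (1, 1) PRIMARY KEY,   -- four-byte whole number
    IsActive bit,                               -- true/false stored as 0 or 1
    Balance  money,                             -- exact monetary values
    Notes    ntext,                             -- long Unicode text
    RowGuid  uniqueidentifier DEFAULT NEWID()   -- globally unique identifier
)
```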


Creating a Database Schema


Although creating a list of tables and fields is a good way to nail down the structure of the database, you
may also want to be able to look at the tables and fields in a graphical format. That will let you see not only
which tables and fields are available to you, but also how they relate to each other. You do so by creating a
schema.
A schema is a road map to your database. A schema comprises diagrams of all the tables, fields, and
relationships in your database. Including a database schema as a part of your software design is important
because it gives you a quick way to see what's going on in your database.
Schemas also are important long after the database design process has been completed. You'll need the
schema to perform multitable queries on the data. A good graphical schema answers questions such as:
Which tables do I need to join to list all the orders greater than $50.00 that came in from customers in
Minnesota in the last 24 hours? (For more information on how to create queries based on more than one
table, see Chapter 2.)
There is no one standard way to create database schemas, although there are many tools that you can use
to create them. The drawing tool Visio is flexible, fast, and easy to use, and it integrates well with other
Windows applications, particularly Microsoft Office. Visio is now shipped as part of Visual Studio.NET
Enterprise Architect edition; it's also available separately.
You're not limited to using Visio when creating a graphical database schema. You can use whatever drawing
tool you're comfortable with. Microsoft Windows Paint is a workable option, as are Microsoft Word's drawing
features.

Using Visual Studio to Create a Database


There are a number of ways to create a database in SQL Server. SQL Server has its own set of tools, known as SQL
Enterprise Manager, which enables you to create databases and tables graphically or programmatically
(using SQL commands). A number of external tools are also available that enable you to work with database
structures (one of which, Visio, is described later in this chapter).
Visual Studio.NET has an outstanding facility for working with a SQL Server database. This facility is
contained in Server Explorer, a new Visual Studio feature that lets you work with all kinds of server software
in an integrated way. To use Server Explorer to create a SQL Server database, do the following.

1. Launch VS.NET.
2. From the left side of the VS.NET window, select the Server Explorer tab. The Server Explorer window
appears. (Note that tabs to select Server Explorer may be vertical or horizontal.)
3. Expand the outline so that you can see your server from the Servers node. Beneath your server name
should be a SQL Servers node. Expand it to see the instance of SQL Server running on your machine,
as shown in Figure 1.1.
Figure 1.1. The Server Explorer window in Visual Studio.NET. In this window, you can
manage server processes, such as SQL Server.

4. To create a new database, right-click on the name of the SQL Server running on your machine (in
Figure 1.1, the name of the computer is ROCKO; yours, of course, will be different). From the pop-up
menu, select Create Database.
5. The Create Database dialog appears. Type the name of the database (Novelty) and click on OK.
6. The new database should appear in the Server Explorer window.
Expanding the outline view for the database that you just created reveals five categories of database objects
available from VS.NET:

Database Diagrams
Tables
Views
Stored Procedures
Functions
As you can't do much in a database without a table, first create one by doing the following.

1. In Server Explorer, right-click on the Tables node beneath the Novelty database. From the pop-up
menu, choose New Table.
2. A table design window appears. Create tblCustomer having the following fields and definitions for those
fields.

Column Name     Data Type    Length    Allow Nulls

ID              int [*]                No
FirstName       varchar      20        Yes
LastName        varchar      30        Yes
Company         varchar      50        Yes
Address         varchar      50        Yes
City            varchar      30        Yes
State           char                   Yes
PostalCode      varchar                Yes
Phone           varchar      15        Yes
Fax             varchar      15        Yes
E-mail          varchar      100       Yes

[*] Note that the ID field will be the identity column; that is, it will contain a unique number
(integer) for each row in the table.
3. The information inserted in the table yields the result shown in Figure 1.2.
Figure 1.2. Creating a table definition by using Visual Studio.NET's Server Explorer

4. Click on the ID field. From the Diagram menu, select Set Primary Key. Doing so will ensure that no two
customers in your database can have the same ID number. (We present more information on primary
keys in the next section.)
5. Next you'll need to make the ID field an identity column. That will cause SQL Server to generate an ID
number automatically for each of your customers. To do so, right-click on the table definition window.
From the pop-up menu, choose Indexes/Keys.
6. A property page for the table definition appears. Click on the Tables tab at the top of the page.
7. In the field Table Identity Column, choose ID.
8. Click on Close.
9. From the File menu, choose Save Table1. The Choose Name dialog appears, asking you to specify a
better name for your table. Type the name tblCustomer and click on OK. The table is saved; note that
it has been added to the list of tables in this database in Server Explorer.
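The table you just built interactively can also be expressed as a DDL script (DDL is covered in Chapter 2). The following is a partial sketch, not Server Explorer's exact output; it shows only the first few columns:

```sql
-- Sketch of DDL equivalent to the Server Explorer steps (partial column list)
CREATE TABLE tblCustomer
(
    ID        int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
    FirstName varchar (20) NULL,
    LastName  varchar (30) NULL,
    Company   varchar (50) NULL
    -- ...the remaining columns follow the same pattern
)
```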

Designating Indexes and the Primary Key


Now that you've created the basic table, one thing remains to be done: You must designate indexes. An
index is an attribute that you can assign to a field, making it easier for the database engine to retrieve data
based on information stored in that field. For example, if you have a database that tracks employees, your
application will probably tend to look up employees by last name, department, and individual ID number. It
makes sense then to create indexes on each of these fields, to make the process of retrieving records based
on these fields faster.
Once you've realized the benefits of indexes in database design, you might ask yourself: If
indexes make lookups faster, why not place an index on every field in every table? The answer is that indexes
carry diminishing returns. Indexes make your database physically larger, so too many of them consume
excessive memory and disk space. In addition, each index must be updated whenever the data it covers
changes, and that maintenance overhead can cancel out the benefit of having the index in the first place.
There's no hard and fast rule for how many indexes each table should have, but in
general, you should create indexes for the fields that you envision will be queried most often. (For more
information on how to use the information in a field as a query criterion to retrieve sets of records, see
Chapter 2.)
A primary key is a special type of index. A field that is designated as a table's primary key uniquely identifies
each record. So, unlike other types of indexes, no two records in the same table may have the same value in
the primary key field. Also, when you designate a field as a primary key, no record may contain an empty, or
null, value in that field. When you designate a field in a table as that table's primary key, you can create
relationships between that table and other tables in your database.
Every table you create should have at least a primary key, and it should also be indexed on those fields you
expect to be queried the most. In the case of tblCustomer, as with many database tables, the primary key
will be the ID field. (You should have made this field the primary key earlier when you created tblCustomer.)
The secondary indexes will be the LastName and FirstName fields.
Now you can create two more indexes, for the FirstName and LastName fields, by doing the following.

1. Right-click on the Server Explorer table design window for tblCustomer. From the pop-up menu,
choose Indexes/Keys.
2. A property page appears with a list of existing indexes. The primary key index (called PK_tblCustomer)
should already be there. Click on the New button to create a new index for the FirstName field.
3. In the list of column names, choose FirstName (as shown in Figure 1.3), then click on Close.

Figure 1.3. The Table Structure dialog box, after all the fields and indexes have been
designated

4. Repeat this process for the LastName field.

Caution
At the bottom of the property page there's an option labeled "Create UNIQUE". Don't check it!
If you do, you won't be able to add two people with the same first name to the database.
Create unique indexes only when you want to ensure that two records with the same value in a
given field can't be created.
5. To save your changes to the database, choose the menu command File, Save tblCustomer. You may
close the table design window in VS.NET after saving the changes successfully.
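The same two indexes can be expressed in DDL as well. This is a sketch; the index names below are our own, chosen to mirror the PK_tblCustomer naming convention:

```sql
-- Nonunique indexes on the name fields; UNIQUE is deliberately omitted
-- so that two customers may share a first or last name
CREATE INDEX IX_tblCustomer_FirstName ON tblCustomer (FirstName)
CREATE INDEX IX_tblCustomer_LastName  ON tblCustomer (LastName)
```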
Now that you've created the data structure of the table, you may want to enter data into it. Server Explorer
makes it easy; to work with data in the table, simply right-click on it in the Server Explorer window and
choose Retrieve Data From Table from the pop-up menu. A data-entry grid appears, as shown in Figure 1.4.

Figure 1.4. Entering data into a newly created table, using the Retrieve Data From Table feature
of Server Explorer

You can enter data into this grid by typing; when you move off a row the data you enter is automatically
saved in the database. Don't bother entering data into the ID field. Remember, because you designated it as
an identity column when you created the table, the database engine will fill in an ID for you automatically
when the new record is created.
Now that you've gone through the steps to create a single table in your database, you should be able to use
Visual Studio.NET Server Explorer to model virtually any kind of database you require. However, at this
stage there is one thing that can get in your way: the ability to model complicated relationships between
multiple tables in a complex data design. You can use a database diagram to simplify that task.

Creating Database Diagrams


A database diagram is a visual representation of the tables in a database. You can use the diagramming
features provided by SQL Server to create tables and the relationships between them visually. To create a
database diagram in Visual Studio.NET's Server Explorer, do the following.

1. Under the Novelty database node in Server Explorer, right-click on the Database Diagrams node. From
the pop-up menu, choose New Diagram.
2. The Add Table dialog box appears. It shows you a list of the tables that currently exist in the database;
if you created tblCustomer earlier, it should appear here. Select it, click on Add, and then click on
Close.
3. A visual representation of the structure of tblCustomer is added to the diagram, as shown in Figure
1.5.
Figure 1.5. Diagram for the Novelty database. The tables that you choose are automatically
represented in the diagram.

4. To add a second table to this diagram, right-click in the white space around tblCustomer and select
New Table from the pop-up menu.
5. The Choose Name dialog appears. Give this new table the name tblOrder.
6. A table definition sheet appears in the diagram. Create fields for tblOrder, as shown in Figure 1.6.
Figure 1.6. Field definitions for the new tblOrder table, created in the database diagram

7. From the File menu, choose Save. A confirmation dialog appears, asking you if you want to save the
table to the database. Choose Yes. The table should appear in your database definition in Server Explorer.
Now that you have tables for customers and orders, it makes sense to document the relationship that exists
between them. Specifically, whenever an order is created, the ID of the customer will always be copied from
the customer record's ID field to the CustomerID field of tblOrder. To reflect this action in the database
diagram do the following.

1. Click on the ID field in tblCustomer and drag to the CustomerID field in tblOrder.
2. The Create Relationship dialog appears. The settings in this dialog box denote the properties of a
relationship constraint that is being created between the two tables. Once this constraint has been
created, you won't be able, for example, to create orders for customer IDs that don't exist in the
database. This constraint is generally a good thing, so click on OK to confirm its creation.
3. The database diagram is updated to reflect the new relationship, as shown in Figure 1.7.
Figure 1.7. The database diagram for the Novelty database, denoting a relationship between
tblCustomer and tblOrder
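In DDL terms, the constraint created by the drag operation is a foreign key. A sketch of the equivalent command follows; the constraint name is invented, though it mirrors a common SQL Server naming convention:

```sql
-- The diagrammed relationship corresponds to a FOREIGN KEY constraint:
-- every CustomerID in tblOrder must match an ID in tblCustomer
ALTER TABLE tblOrder
    ADD CONSTRAINT FK_tblOrder_tblCustomer
    FOREIGN KEY (CustomerID) REFERENCES tblCustomer (ID)
```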

To finish your work here, choose File, Save DatabaseDiagram1 from the menu. In the Save Database
Diagram dialog box, give the diagram the name Relationships. You should receive a warning dialog indicating
that tables are about to be created in the database; answer Yes so that VS.NET can create tblOrder.
It's particularly useful that SQL Server creates and stores the database diagram within the database itself.
Thus you can always get to the diagram, even from different tools. (You can manipulate database diagrams
in SQL Enterprise Manager, as well as in VS.NET.)

Using Microsoft Visio to View and Alter a Database Schema


You may find it useful to use a graphical tool other than VS.NET to create, inspect, and modify database
schemas. The diagramming tool Microsoft Visio has the capability to diagram database structures
automatically; it can also easily reverse engineer nearly any kind of existing database structure. This
capability makes this tool particularly useful for documenting and working with the database schemas of
databases that were designed in the mists of time by programmers unknown.

Note
It isn't strictly necessary for you to know how to use Visio to set up a SQL Server database. It's
just a different way of rolling database design and documentation tasks into a single set of
operations. If you feel comfortable using Visual Studio's Server Explorer (or SQL Server's own
Enterprise Manager tools), or if you don't have access to the version of Visio that comes with
Visual Studio Enterprise Architect, you can safely skip this section.

Reverse engineering a database means inspecting an existing database schema and creating an entity relationship
diagram (ERD) from it. An entity relationship diagram is a type of symbolic database design that focuses on
the broad categories of data known as entities (typically stored in the database as tables).
To reverse engineer a database schema using Visio, follow these steps.

1. Start Visio 2002 for Enterprise Architects. The Choose Drawing Type panel appears; select the
Database category.
2. From the Template panel, select Database Model Diagram. The basic Visio drawing window appears, as
shown in Figure 1.8.
Figure 1.8. The basic Visio drawing window. The drawing template appears on the left, and
the drawing area is on the right. You create drawings by dragging items from the template
onto your drawing.


3. From the Database window, select Reverse Engineer. The Visio Reverse Engineer Wizard launches.
4. From the drop-down list of Visio drivers, select Microsoft SQL Server.
5. You'll next need to define a data source that will enable you to access the Novelty database. To do so,
click on the New button.
6. The Create New Data Source dialog box appears. It first asks you to specify which kind of data source
to create. Select System data source and then click on Next.
7. The next screen asks you to select a database driver (again). Choose SQL Server (again). You'll
probably have to scroll down the list to get to the SQL Server driver. Click on Next and then click on
Finish.
8. Another dialog box, Create a New Data Source to SQL Server, appears. Enter the name Novelty for the
data source. In the drop-down list labeled "Which SQL Server do you want to connect to?" choose
(local). Then click on Next.
9. Specify the authentication mode you use on your SQL Serverwhich you should have specified when
you installed SQL Server. (For more information on this topic, see the discussion of SQL Server
authentication modes in Chapter 3.) Then click on Next.
10. In the next screen, check the box labeled "Change the default database to:". From the drop-down list,
choose the Novelty database. Click on Next and then click on Finish.
11. The final dialog box, ODBC Microsoft SQL Server Setup, appears. It gives you the ability to test the
connection to your data source by using the information you just provided. Click on the Test Data
Source button to run the test; it's always a good idea to be sure that it works, as a lot of
information is required to create a connection to a data source. Once the connection has been verified,
click on OK.
12. You should be back at the Reverse Engineer Wizard, and the Novelty data source should have been
automatically selected. Double-click on Next.
13. When the wizard asks you to select the tables you want to reverse engineer, check both tblCustomer

and tblOrder. Then click on Finish. Visio creates a diagram of your database, including the relationship
between tblCustomer and tblOrder that you defined previously. This diagram is shown in Figure 1.9.
Figure 1.9. The diagram generated by Visio's Reverse Engineer Wizard, showing the two
tables in the Novelty database and the relationship between them

At this point you may be asking yourself, Why was that process so tedious and painful? The reason is that
the Reverse Engineer Wizard kicked off a second wizard that created something called an ODBC data source.
ODBC is an old Microsoft technology for providing interoperability between relational databases for
application developers. (It's described in more depth in the VB6 edition of this book; it's not used extensively
in VS.NET, so we're not repeating that discussion here.)
The important thing to know about ODBC is that, once you've created a named ODBC data source using the
steps in this section, you don't have to do it again. The next time you need to work with the Novelty
database on your computer, you can simply use the ODBC data source that you just defined.
You may now want to add another table to the database through Visio. Recall that Brad Jones's cocktail-napkin
design for this database included the ability to divide customers into regions. Thus you'll need a table
of regions, which you can add in Visio as follows.

1. From the Entity Relationship template on the left side of the Visio window, click on the Entity shape
and drag it onto the drawing area. A new entity (table) is created, initially labeled "Table1".
Right-click on the entity shape that you just created and then select Database Properties from the pop-up menu. A Database Properties sheet appears at the bottom of the Visio window.
3. Type the name of the table, tblRegion, into the Physical Name field.
4. In the list of Categories in the Database Properties sheet, click on Columns. Create the three fields in
the table definition by typing them into the grid. Note that, to denote the length of the char and
varchar fields in the table, you must select the field and click on the Edit button on the right side of the
property sheet.
When you're done, the graphic should look like Figure 1.10.
Figure 1.10. The Visio ERD diagram containing the new definition for tblRegion

Note
The preceding method is a very simple way to create a database schema; however, more involved
methods might suit your purposes better. In fact, Visio has a number of specialized templates for
creating database diagrams.

There is a relationship between the new tblRegion and the existing tblCustomer (through the State fields
that exist in both tables), which your diagram should reflect. You can create a relationship between two
tables in Visio by using the Relationship shape, as follows.


1. Click on and drag a Relationship shape onto your drawing. This shape is represented as a line with an
arrow on one end. You should be able to see green squares (called handles) on each end of the line.
2. Click on and drag one of the green handles onto the entity shape for tblRegion. The handle should turn
red to indicate that it's not yet complete.
3. Click on and drag the other green handle onto the entity shape for tblCustomer.
4. In the property sheet at the bottom of the Visio window, select the State fields in both tables and then
click on the Associate button. Your diagram should look like that in Figure 1.11. Note that the button
located between the two listboxes showing you the column names of the two tables being associated
will either appear as disabled or show Disconnect or Associate. The button is enabled and the text
reads "Associate" when you have selected one column from each listbox.
Figure 1.11. The Visio ERD diagram displaying the relationship between tblRegion and
tblCustomer

Now that you've drawn the diagram for a new table in your database, you can use Visio to create the table in
the database. To do so, select the Visio menu command Database, Update. The Visio Database Update
Wizard will launch, asking you how you want to perform the update. You may want Visio simply to generate
a Data Definition Language (DDL) script that will perform the necessary changes to your database; this
decision will also have the side benefit of documenting the changes in case you need to replicate them later.
(For more information on how DDL commands work, see Chapter 2.) Or you may simply want Visio to make
the changes to the database directly. You have the option to perform either or both operations with Visio's
Update Database Wizard.
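A script generated this way consists of ordinary CREATE TABLE and related statements. As a rough sketch for tblRegion (the State column is the one shared with tblCustomer; the other column names and all lengths here are placeholders standing in for the fields shown in Figure 1.10, not Visio's actual output):

```sql
-- Sketch only: placeholder column names and lengths
CREATE TABLE tblRegion
(
    ID         int IDENTITY (1, 1) NOT NULL PRIMARY KEY,  -- placeholder
    State      char (2) NOT NULL,           -- shared with tblCustomer
    RegionName varchar (50) NULL            -- placeholder
)
```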
Often, creating a graphical database schema will reveal flaws in your design. For example, the database
design that you have so far enables the business to store information on customers and orders. But orders
consist of items taken from the company's inventory and sold to the customer. With your current design,
there's no way to see what the customer actually ordered.
The solution to this problem is to create a new table for items associated with an order. The design of this
new table looks like the following.
tblOrderItem
ID
OrderID
ItemID
Quantity
Cost
There is a one-to-many relationship, then, between the tblOrder table and the tblOrderItem table. The
database schema now should look like that in Figure 1.12.
Figure 1.12. The evolved database schema, including relationships among four tables in the
database
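A hedged DDL sketch of tblOrderItem follows. The OrderID foreign key captures the one-to-many relationship with tblOrder; ItemID presumably points at an inventory table that hasn't been defined yet, so no constraint is shown for it:

```sql
-- Sketch of tblOrderItem; each row is one line item on an order
CREATE TABLE tblOrderItem
(
    ID       int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
    OrderID  int NOT NULL REFERENCES tblOrder (ID),  -- "many" side of the
                                                     -- one-to-many relationship
    ItemID   int NOT NULL,  -- inventory table not yet defined; no constraint
    Quantity int NULL,
    Cost     money NULL
)
```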

The complete Visio file is included in the downloadable source code for this book from the Addison-Wesley
Web site, www.awprofessional.com.

Note
Don't confuse the process of developing a database schema with a software design methodology.
Most successful software development organizations have a design methodology in place that
dictates what business problems the software is supposed to solve, how the software application
will look, how it will be built, and the like. You should consider all these issues before you design a
database.


Relationships
A relationship is a way of formally defining how two tables relate to each other. When you define a
relationship, you are telling the database engine which two fields in two related tables are joined.
The two fields involved in a relationship are the primary key, introduced earlier in this chapter, and the
foreign key. The foreign key is the key in the related table that stores a copy of the primary key of the main
table.
For example, suppose that you have tables for departments and employees. There is a one-to-many
relationship between a department and a group of employees. Every department has its own ID, as does
each employee. In order to denote which department an employee works in, however, you must copy the
department's ID into each employee's record. So, to identify each employee as a member of a department,
the Employees table must have a fieldsay, DepartmentIdto store the ID of the department to which that
employee belongs. The DepartmentID field in the Employees table is referred to as the foreign key of the
Employees table, because it stores a copy of the primary key of the Departments table.
A relationship, then, tells the database engine which two tables are involved and which foreign key is related
to which primary key. The old Access/JET engine doesn't require that you explicitly declare relationships, but
it's advantageous for you to do so. The reason is that it simplifies the task of retrieving data based on
records joined across two or more tables (discussed in more detail in Chapter 2). This lack of enforced
declaration is one of the major weaknesses of the JET technology and a compelling reason to upgrade any legacy
applications still using JET to ADO.NET. In addition to matching related records in separate tables, you
also need to define a relationship to take advantage of referential integrity, a database engine property that
keeps data in a multitable database consistent. When referential integrity exists in a database, the database
engine prevents you from removing a record when other records are related to that record in the database.
After you define a relationship in your database, the definition of the relationship is stored until you remove
it. You can define relationships graphically, using a database diagram in VS.NET, SQL Enterprise Manager, or
Visio, or programmatically with SQL DDL commands.

Using Referential Integrity to Maintain Consistency


When tables are linked through relationships, the data in each table must remain consistent with that in the
linked tables. Referential integrity manages this task by keeping track of the relationships among tables and
prohibiting certain types of operations on records.
For example, suppose that you have one table called tblCustomer and another table called tblOrder. The two
tables are related through a common field: the ID field in tblCustomer relates to the CustomerID field in
tblOrder.
The premise here is that you create customers that are stored in tblCustomer and then create orders that
are stored in tblOrder. But what happens if you run a process that deletes a customer who has outstanding
orders stored in the order table? Or what if you create an order that doesn't have a valid CustomerID
attached to it? An order without a CustomerID can't be shipped, because the shipping address is a function
of the record in tblCustomer. In such a situation the data is said to be in an inconsistent state.

Because your database must not become inconsistent, many database engines (including SQL Server)
provide a way for you to define formal relationships among tables, as discussed earlier in this chapter. When
you formally define a relationship between two tables, the database engine can monitor the relationship and
prohibit any operation that violates referential integrity.
Referential integrity constraints generate application errors whenever the application attempts to perform an
action that would leave data in an inconsistent state. For example, in a database with referential integrity
activated, if you attempted to create an order that contains a customer ID for a customer who didn't exist,
you'd get an error and the order wouldn't be created.

Testing Referential Integrity Constraints, Using Server Explorer


Now that you have a database with the related tblCustomer and tblOrder tables, you can use Server Explorer
to examine and test the relationship between them. To do so, follow these steps.

1. Open the database diagram for the Novelty database you created earlier. The two tables, tblCustomer
and tblOrder, should appear in the diagram. Note that, although you should have more than two tables
in the database, only the first two you created exist in this diagram. To keep diagrams simple, new
tables that you create are not automatically added to diagrams. If you want to, you can easily add
tables to this diagram to create a complete road map of the database, but for now we're interested
only in tblCustomer and tblOrder.
2. Right-click on the relationship, the line connecting the two tables. From the pop-up menu, choose
Property Pages.
3. When the property page for this relationship appears, click on the Relationships tab. The relationship
denotes a link between the ID field in tblCustomer and the CustomerID field in tblOrder. Toward the
bottom of the property page there should also be settings for constraints and cascades.

Note
By default, when you create a relationship, the relationship is enforced (for example, you can't
create an order for a nonexistent customer) but isn't cascaded. We discuss cascading in more
detail in the next section.
4. Check the box labeled "Cascade Delete Related Records" and then click on the Close button.
5. To save your changes, select the menu command File, Save Relationships.
To test the constraint imposed by the relationship, do the following.

1. In Server Explorer, right-click on tblOrder. From the pop-up menu, select Retrieve Data from Table.
2. Enter an order for a customer with an ID of 9999. Presumably, unless you've been doing an incredible
amount of data entry on this table for no reason, there's no customer with an ID of 9999 in the
database.
3. Move off the row you're entering to attempt to save it. You should not be successful; you should get
an error message saying, "INSERT statement conflicted with COLUMN FOREIGN KEY constraint
'FK_tblOrder_tblCustomer'. The conflict occurred in database 'Novelty', table 'tblCustomer', column
'ID'."
4. Cancel the error message and hit the Esc key to abort the record insertion.

There's no need actually to enter the data (the error message is what we were looking for). However, if you
needed to create an order for some reason, you'd first need to create a customer, get the ID for that
customer, and use it in the CustomerID field when creating an order.
In a real application, this problem would be handled gracefully and automatically; you'd typically design a
user interface to avoid the problem in the first place. We discuss a variety of strategies to deal with
manipulating related records consistently throughout this book.

Cascading Updates and Cascading Deletes


Cascading updates and cascading deletes are useful features of the SQL Server database engine. They cause
the following things to happen in your database.

With cascading updates, when you change a value in a table's primary key, the data in the foreign keys
related to that table changes to reflect the change in the primary key. For example, if you had a
customer named Halle's Hockey Mart whose ID was 48, and you changed that ID in the
tblCustomer table from 48 to 72, the CustomerID field of all the orders generated by Halle's Hockey
Mart in the tblOrder table would also change automatically from 48 to 72. You shouldn't need to change the
key of a record (one of the central concepts of keys is that they're unique and immutable), but if you
ever do, it's nice to know that cascading updates can do the trick.
With cascading deletes, when you delete a record in a table, all the records related to that record in
other tables also are automatically deleted. Therefore, if you delete the record for Halle's Hockey Mart
in the tblCustomer table, all the orders in the tblOrder table for Halle's Hockey Mart are automatically
deleted. As you might expect, this use of cascading in a relational database is fairly common.

Note
Be cautious when setting up relationships that perform cascading updates and cascading deletes
in your data designs. If you don't plan carefully, you could wind up deleting (or updating) more
data than you intended. Some database developers avoid the use of cascades altogether,
preferring explicitly to maintain referential integrity across related tables. That's fine, but once you
get the hang of how they work, you'll probably find cascades easy to program.

Cascading updates and cascading deletes work only if you've established a relationship between two tables.
If you always create tables with AutoNumber (or, in SQL Server terms, AutoIncrement) primary keys, you'll
probably find that cascading deletes is more useful than cascading updates. The reason is that you can't
change the value of an AutoNumber or AutoIncrement field (so there's no "update" to "cascade").
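In T-SQL terms, the "Cascade Delete Related Records" checkbox you set earlier corresponds to an ON DELETE CASCADE clause on the foreign-key constraint. A sketch of the equivalent DDL might look like this (assuming the constraint doesn't already exist; if it does, you'd drop it first):

```sql
-- Recreate the tblCustomer/tblOrder relationship with
-- cascading deletes enabled.
ALTER TABLE tblOrder
    ADD CONSTRAINT FK_tblOrder_tblCustomer
    FOREIGN KEY (CustomerID) REFERENCES tblCustomer(ID)
    ON DELETE CASCADE
```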
You can examine how cascading deletes work by using the tools provided by Server Explorer, as follows.

1. Previously you designated the relationship between tblCustomer and tblOrder to support cascading
deletes. (If you want to confirm this relationship, use the database diagram that you created
previously.)
2. Create a customer by right-clicking on tblCustomer in the Tables folder and then selecting Retrieve
Data from Table from the pop-up menu. Note the ID that the database engine assigns to the newly
created customer; you'll need it shortly when you create orders for this customer. Leave this table
open because you'll be returning to it in a moment.
3. Open the tblOrder table and create two or three order records for the customer you just created. To
relate each order to the customer, enter the customer's ID in the CustomerID field. Leave this table
open as well.
4. Go back to the tblCustomer data-entry grid and delete the customer record by right-clicking on the
gray row selector on the far left side of the row and then choosing Delete from the pop-up menu.
5. Visual Studio.NET displays a warning message asking you if you really want to delete the data. Answer
Yes.
6. Go back to the tblOrder window. Whoops, you probably expected that the orders you entered for this
customer would have been deleted. But they're still there; what happened? Actually, they were
deleted; you're just looking at an outdated view of the data. To refresh the data, select the menu
command Query, Run. The data-entry grid refreshes itself by refetching the data from the database,
revealing that the order records for the customer you deleted were automatically deleted thanks to the
magic of cascading.


Normalization
Normalization is related conceptually to relationships. Basically, normalization dictates that your database
tables eliminate inconsistencies and minimize inefficiency.
Recall that databases are called inconsistent when data in one table doesn't match data in another table. For
example, if half your staff thinks that Arkansas is in the Midwest and the other half thinks it's in the
South, and if both factions handle data entry accordingly, your database reports on how things are doing in
the Midwest will be meaningless.
An inefficient database doesn't allow you to isolate the exact data you want. A database in which all the data
is stored in one table might force you to slog through myriad customer names, addresses, and contact
histories just to retrieve one person's current phone number. In contrast, in a fully normalized database each
piece of information in the database is stored in its own table and is identified uniquely by its own primary
key. Normalized databases allow you to reference any piece of information in any table if you know that
information's primary key.
You decide how to normalize a database when you design and initially set it up. Usually, everything about
your database application (from table design to query design and from the user interface to the behavior of
reports) stems from the way you've normalized your database.

Note
As a database developer, sometimes you'll come across databases that haven't been normalized
for one reason or another. The lack of normalization might be intentional (as it's often possible to
trade good normalization for other benefits, such as performance). Or it might be a result of
inexperience or carelessness on the part of the original database developer. At any rate, if you
choose to redesign an existing database to enforce normalization, you should do so early in your
development effort (because everything else you do will depend on the table structure of the
database). Additionally, you will find SQL data-definition language commands to be useful tools in
fixing a deficiently designed database. DDL commands enable you to move data from one table to
another, as well as add, update, and delete records from tables based on criteria you specify.
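For instance, retrofitting normalization onto a table that stores a redundant Region column could be sketched with two such commands (a hypothetical cleanup, assuming the tblCustomer and tblRegion designs discussed in this section):

```sql
-- Copy the distinct State/Region pairs into the new lookup table...
INSERT INTO tblRegion (State, Region)
    SELECT DISTINCT State, Region FROM tblCustomer

-- ...then drop the now-redundant column from the original table.
ALTER TABLE tblCustomer DROP COLUMN Region
```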

As an example of the normalization choices you have to make during the database design phase, consider
the request made by Brad Jones in Business Case 1.2. His business needs a way to store both a customer's
state of residence and the region of the country in which the customer lives. The novice database designer
might decide to create one field for state of residence and another field for region of the country.

tblCustomer
ID
FirstName
LastName
Address
Company
City
State
PostalCode
Phone
Fax
E-mail
Region
This structure might initially seem rational, but consider what would happen if you try to enter data into an
application based on this table.
You'd have to enter the normal customer information (name, address, and so on), but then, after you'd
already entered the customer's state, you'd have to come up with the customer's region. Is Arkansas in the
Midwest or the South? What about a resident of the U.S. Virgin Islands? You don't want to leave these kinds
of decisions in the hands of your data-entry people, no matter how capable they might be, because if you
rely on the record-by-record decisions of human beings, your data will ultimately be inconsistent. And
defeating inconsistency is one of the primary reasons for normalization.
Instead of forcing your data-entry people to make a decision each time they type in a new customer, you
want to store information pertaining to regions in a separate table, tblRegion.
tblRegion
ID
State
Region
The State and Region data in such a table would be recorded as follows.

State   Region
-----   ------
AK      North
AL      South
AR      South
AZ      West
In this refined version of the database design, when you need to retrieve information about a region, you
would perform a two-table query with a join between the tblCustomer and tblRegion tables, with one
supplying the customer's state and the other identifying the region for that state. Joins match records in
separate tables that have fields in common. (See Chapter 2 for more information on how to use joins.)
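Such a query might be sketched as follows (join syntax is covered in detail in Chapter 2; the column list here is illustrative):

```sql
-- Return each customer together with the region derived
-- from his or her state of residence.
SELECT c.FirstName, c.LastName, c.State, r.Region
FROM tblCustomer c
    INNER JOIN tblRegion r ON c.State = r.State
```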
Storing information pertaining to regions in a single table of its own has many advantages, including the
following.

If you decide to carve a new region from an existing region, you simply alter a few records in the
tblRegion table to reflect the change, not the thousands of records that might exist in the tblCustomer
table.
Similarly, if you started doing business in regions other than the 50 states, you can easily add a new
region to accommodate changes in how your business is structured. Again, you'd need to add only a
single record for each new area to tblRegion. That record then becomes available immediately
throughout your system.
If you need to use the concept of regions again somewhere else in your database (to denote that a
sales office located in a particular state served a particular region, for example), you could reuse
tblRegion without modification.
In general, then, you should always plan on creating distinct tables for distinct categories of information.
Devoting time to database design before you actually build the database will give you an idea as to which
database tables you'll need and how they relate to each other. As part of this process, you should map the
database schema, as discussed in the Creating a Database Schema section earlier in this chapter.

One-to-One Relationships
Say that your human resources database contains tables for employees and jobs. The relationship between
employees and jobs is referred to as a one-to-one relationship because for every employee in the database
there is only one job. One-to-one relationships are the easiest kind of relationships to understand and
implement. In such relationships, a table usually takes the place of a field in another table, and the fields
involved are easy to identify.
However, a one-to-one relationship is not the most common relationship found in most mature database
applications, for two reasons.

You can almost always express a one-to-one relationship without using two tables. You might do so to
improve performance, although you lose the flexibility of storing related data in a separate table. For
example, instead of having separate employees and jobs tables, you could store all the fields related to
jobs in the employees table.
Expressing a one-to-many relationship is nearly as easy as (and far more flexible than) expressing a
one-to-one relationship, for reasons we'll go into in the next section.

One-to-Many Relationships
More common than a one-to-one relationship is a one-to-many relationship, in which each record in a table
can have none, one, or many records in a related table. In the database design we created earlier, there's a
one-to-many relationship between customers and orders. Because each customer can have none, one, or
many orders, we say that a one-to-many relationship exists between tblCustomer and tblOrder.
Recall that, to implement this kind of relationship in a database design, you copy the primary key of the
"one" side of the relationship to the table that stores the "many" side of the relationship. In a data-driven
user interface, this type of relationship is often represented in a master/detail form, in which a single

("master") record is displayed with related ("detail") records displayed in a compact grid beneath them. In a
user-interface design, you'll usually copy the primary key of one table to the foreign key of a related table
with a list box or combo box.

Many-to-Many Relationships
A many-to-many relationship takes the one-to-many relationship a step farther. The classic example of a
many-to-many relationship is the relationship between students and classes. Each student can have multiple
classes, and each class has multiple students. (Of course, it's also possible for a class to have one or no
students, and it's possible for a student to have one or no classes.)
In our business example, there's a relationship between orders and items. Each order can comprise many
items, and each item can appear on many orders.
To set up a many-to-many relationship, you must have three tables: the two tables that store the actual
data and a third table, called a juncture table, that stores the relationship between the two data tables. The
juncture table usually consists of nothing more than two foreign keys (one from each related table),
although sometimes it's useful for the juncture table to have an identity field of its own in case you
need to access a record in the table programmatically.
An example of a many-to-many relationship is to configure the business database to store multiple items per
order. Each order can have multiple items, and each item can belong to an unlimited number of orders.
These tables would look like those shown in Figure 1.13.
Figure 1.13. Tables involved in a many-to-many relationship. In this design, tblOrderItem is the
juncture table.


Creating a User Interface in a Windows Forms Application


The developers of previous versions of Visual Basic pioneered the concept of data binding, in which a data
connection object (known as a data control) gave designers the ability to create simple, data-driven user
interfaces with a minimum of programming. The good news is that the concept of data binding still exists in
Visual Basic.NET. The even better news is that many of the aspects of data binding that previously frustrated
designers have been improved or done away with in .NET.
In the past, a designer managed the connection between a Visual Basic form and a database with data
controls. These controls also provided basic data browsing functionality, enabling an application to navigate
through a set of data and add and update records.
In .NET, maintaining a connection to a database and perusing records is handled by automatically generated
code. This feature has a number of advantages, including the following.

1. Because it's presented in the form of code rather than an abstract data control, you have far more
control over how data access is managed.
2. If you're just learning how to use .NET classes to access data (which you presumably are, as you're
reading this book), you can inspect the automatically generated code to see how it's done.
3. The primary functions of the old VB data control (establishing a connection, querying the database,
and manipulating data) are broken into separate objects, each of which can be configured, used, and
reused separately.
If you followed the demonstrations earlier in this chapter, you should now have a functional database with
some data in it. It should be sufficient for you to experiment with building various kinds of data-bound user
interfaces. In the next several sections we demonstrate how to build Windows Forms applications that
connect to your database.

Connecting to a Database and Working with Records


Creating a Windows Forms application that accesses data is quite simple; in fact, if all you're interested in
doing is browsing the database, you don't even have to write a single line of code. It's a matter of first
creating a connection to the database and then binding user interface controls to the data source generated
by VS.NET. You can do so as follows.

1. In VS.NET, start a new Windows Forms project. A new form, Form1, appears.
2. In the Server Explorer window, locate the table called tblCustomer that you created earlier. Click on
and drag the table from the Server Explorer window onto your form.
3. Two objects appear at the bottom of the window beneath Form1: SqlConnection1 and
SqlDataAdapter1.

These are two of the three objects required to retrieve and display data: the SqlConnection1 object
creates a connection to the database, and the SqlDataAdapter1 object is responsible for retrieving data
from the database. The third object is the DataSet object, which actually stores the data retrieved by the
data adapter. To create a DataSet to bind to data, do the following.

1. From the Data menu, select Generate DataSet; its dialog box appears.
2. Accept the default settings and click on OK. A new DataSet object is created alongside the
SqlConnection1 and SqlDataAdapter1 objects created previously.
To view data in the form, you'll next need to create a user interface control on the form and bind it to the
DataSet object that you just created. To do so, take the following steps.

1. From the Windows Forms section of the Visual Studio.NET toolbox, click on the DataGrid object and
drag it onto Form1. An instance of the DataGrid object should appear.
2. If it's not visible, open the Properties window (by choosing the menu command View, Properties
Window). Set the DataGrid's DataSource property to the name of the DataSet you created
(DataSet11). Set the DataMember property to the name of the table (tblCustomer). The DataGrid
should change to display the fields in tblCustomer.
3. Finally, you'll need to write a line of code to retrieve the data from the database and populate the
DataSet object. To do so, double-click on the form; the event procedure Form1_Load should appear in
a code window. Enter the following code.

Private Sub Form1_Load(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles MyBase.Load
    SqlDataAdapter1.Fill(DataSet11)
End Sub

4. Choose the menu command Debug and begin to run the application. Data from your database should
be displayed in a grid.
You may notice one thing in particular about this application: Although it appears that you can make changes
to the data, any changes that you do make won't be committed to the database; in other words, they won't
be saved. To save data, you'll need to write code to call a method of the DataAdapter object in your
project. We discuss this task in the Updating Records section later in this chapter.
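As a preview, committing the grid's changes back to the database takes one additional call on the data adapter. A minimal sketch might look like this (the btnSave button is a hypothetical addition, not part of the walkthrough above):

```vbnet
Private Sub btnSave_Click(ByVal sender As Object, _
        ByVal e As EventArgs) Handles btnSave.Click
    ' Write changed, new, and deleted rows held in the DataSet
    ' back to the database.
    SqlDataAdapter1.Update(DataSet11)
End Sub
```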

Creating a Data Browser Application


The preceding demonstration shows the easiest type of data binding: retrieving an entire table and
displaying it in a DataGrid control. But what about displaying data one record at a time? To do that, you'll
have to use a combination of TextBox controls, Button controls, and code.
To create a data browser application to view records in the customer table one record at a time, do the
following.

1. Create a new Windows Forms project. On Form1, create two text boxes. Name the first text box
txtFirstName and the second text box txtLastName.
2. Create SqlConnection, SqlDataAdapter, and DataSet objects that retrieve the contents of the
customer table, tblCustomer. (The steps to do so are exactly the same as in the preceding
demonstration.) Don't forget also to call the Fill method of the SqlDataAdapter object in code to
initialize the DataSet object as you did previously.
3. Next, create a binding between the two text boxes and the appropriate fields in tblCustomer. To do
so, click on txtFirstName to select it; then select the property (DataBindings) in the text box's property
sheet. Expanding the (DataBindings) property reveals several settings that enable you to bind the data
from the table to any property of the text box you want.
4. You now need to bind the FirstName field in tblCustomer to the Text property of the text box
txtFirstName. To do so, click on the drop-down menu to the right of the Text setting in (DataBindings);
then click to expand the outline beneath dsCustomer1, selecting the FirstName field under
tblCustomer. The Properties window should look like the one shown in Figure 1.14.
Figure 1.14. Creating a data binding between a database field and a text box by using the
text box's DataBindings property

5. Bind the text box txtLastName to the database field LastName the same way you bound txtFirstName.
6. Run the application. The first and last name of the first customer should appear.
This application is limited; at this point, you can view only a single record, and once again, you can't change
data or create new customer records. But this application is a good start. We build on it in the next few
demonstrations, adding capabilities that transform the simple data browser into a real database application
with the ability to manipulate data.
Even though this application isn't complete yet, you can already see the power of data binding in .NET; it's
much more flexible and granular than the data-binding options provided in VB6. For example, the ability to

manage the process of binding entirely in code offers a great deal of flexibility.
Next you'll need to add code to enable you to navigate from one record to the next. To do so, do the
following.

1. Create two buttons on the form, one called btnPrevious and the other called btnNext.
2. Double-click on btnNext to expose its Click event procedure definition. Insert the following code for this
event procedure.

Private Sub btnNext_Click(ByVal sender As Object, _
        ByVal e As EventArgs) Handles btnNext.Click
    Me.BindingContext(DsCustomer1, "tblCustomer").Position += 1
End Sub

3. In the Click event procedure for btnPrevious, write the following code.

Private Sub btnPrevious_Click(ByVal sender As Object, _
        ByVal e As EventArgs) Handles btnPrevious.Click
    Me.BindingContext(DsCustomer1, "tblCustomer").Position -= 1
End Sub

4. Run the application again. You should be able to move backward and forward through the customer
table, one record at a time. (Note that this procedure will work only if you have more than one
customer record in the table.)
The BindingContext object provides navigational capabilities for a data-bound application. If you've
created data-bound applications in previous versions of Visual Basic, you know that the Data control was
responsible for navigating from one record to the next. In the .NET framework, however, the
BindingContext object has been factored out of the process of data binding. (Put simply, factoring out an
object entails taking one large object and breaking its functionality into two or more simpler objects.) In
object design, a software designer typically factors out functionality when an object becomes too
complex, or in cases where more granular access to programmatic functionality would provide more
flexibility for the developer.
So in the case of the data browser application, rather than providing one giant Data object that is
responsible for querying, updating, navigating, and binding fields to user interface controls, Windows Forms
and ADO.NET provide separate objects for each of these capabilities. The way that ADO.NET factors out data
access functionality is a key theme of the .NET framework, and it's one that we return to repeatedly
throughout this book.
The BindingContext object is a member of the Windows Forms family of objects (specifically, a member of
the System.Windows.Forms namespace in the .NET framework). It has a number of useful properties and
methods. For example, you can use the BindingContext property to determine how many records exist in
the data source, as follows.

1. Create a Label control on the form. Name the control lblDataStatus, and clear the control's Text
property.
2. In the code behind the form, create a subroutine called ShowDataStatus that displays the current
record position and the total number of records in lblDataStatus. The code for this subroutine should
look like the following.

Private Sub ShowDataStatus()
    With Me.BindingContext(DsCustomer1, "tblCustomer")
        lblDataStatus.Text = "Record " & .Position + 1 & " of " & .Count
    End With
End Sub

3. Place calls to ShowDataStatus from all of the event procedures in your application (the Load event of
Form1, as well as the Click events of the two navigation buttons). Doing so will ensure that the
display is updated when the application is first loaded and each time you move the current record. Note
that, because the Position property of the BindingContext object is zero-based (as all .NET
collections are), you must add 1 to it for its value to make sense.
4. Run the application and browse the data in your application. The label should display both the current
record number and the total number of records in the customer table.

Performing Binding Programmatically


In Windows Forms, you can perform data binding programmatically. Doing so gives you added flexibility in
situations where the arrangement of fields isn't known at design time, or if you simply want to express the
relationship between bound controls and data fields explicitly in code rather than in the properties window.
To create a binding to a UI control, use the Add method of the DataBindings collection contained by each
Windows Forms control. For example, to assign bindings programmatically in the data browser application,
amend Form1's Load event to read as shown in Listing 1.1.
Listing 1.1 Programmatically clearing and resetting the data bindings of the data browser
application

Private Sub Form1_Load(ByVal sender As Object, ByVal e As EventArgs) _
        Handles MyBase.Load
    txtFirstName.DataBindings.Clear()
    txtLastName.DataBindings.Clear()
    txtFirstName.DataBindings.Add("Text", DsCustomer1, "tblCustomer.LastName")
    txtLastName.DataBindings.Add("Text", DsCustomer1, "tblCustomer.FirstName")
    SqlDataAdapter1.Fill(DsCustomer1)
    ShowDataStatus()
End Sub

Note that the calls to the Clear method of the controls' DataBindings collections aren't necessary in every
application you create; they're necessary only in this case because you defined data bindings by using the
Properties window previously. As an alternative to clearing the data bindings by using the Clear method,
you could have instead removed the DataBindings assignments you originally made in the Properties
window.

The Add method of the DataBindings collection takes three parameters: a property of the control to bind
to, a data source object (typically, but not necessarily, a DataSet), and a reference to a member of the data
source object that provides the data. When you run the application after making this change to the Load
event of the form, the bindings for the data browser application should have been reversed: each customer's
last name now appears first, and the first name appears in the second text box.

About Data-Aware Controls in .NET


A data-aware control is any control that has a DataBindings collection. The DataBindings property refers
to any one of a number of data types, including (but not limited to) a relational data source.
The DataBindings property connects (or "binds") the user-interface control to the data source. The
user-interface control is therefore said to be bound to the database through the data control.
In previous versions of Visual Basic, a relatively limited subset of user-interface controls could be bound to
data sources. For those controls that could be bound to data, the options were very limited: a developer
could generally bind only to data sources for which an ADO provider existed. For non-data-aware controls, a
developer had to write tedious, repetitive code to perform data binding manually. In .NET, nearly every
Windows Forms control can be data-bound, including complex controls such as the Windows Forms TreeView
control. Even better, the developer isn't limited to relational data sources, or even to data sources that
Visual Studio and ADO.NET know about. Any object that implements the .NET IList interface can be bound to
data, including DataSets, as we've already shown, and more mundane constructs such as many types of
arrays and collections.
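As a small illustration of this last point, a plain array can serve as a data source just as a DataSet can. A sketch (the ListBox name lstRegions is arbitrary, not from the walkthroughs above):

```vbnet
' Any IList can act as a data source; here a simple string
' array is bound to a ListBox control.
Dim regions() As String = {"North", "South", "East", "West"}
lstRegions.DataSource = regions
```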

Updating Records in the Data Browser Application


So far, you've easily been able to retrieve and browse data from the database examples presented. You've
also been able to make changes to the data in the user interface, after a fashion. However, the changes
you've made haven't been saved (or committed) to the database. It seems intuitive that, in a data-bound
user interface, updates would happen automatically; the various data controls provided in previous versions
of Visual Basic certainly had no trouble making them. Why doesn't data binding in .NET Windows Forms work
this way?
Committing updates back to a data source in a bound user interface requires a single line of code in Windows
Forms for several good reasons, mainly flexibility and performance. To understand why, it's helpful to have
an understanding of how the DataSet object works in .NET.
Take a look at Figure 1.15. This diagram shows the relationship between the form, the DataSet object, and
the database in a data-bound Windows Forms application.
Figure 1.15. A diagram of the relationship between a bound form, the DataSet object it contains,
and the database

In the data browser application we've created, data initially resides in a database. It is then extracted from
the database and stored in memory in the DataSet object. The form, which is bound to fields in the data
table contained in the DataSet object, detects that new data has appeared and automatically displays the
contents of data fields in bound controls.
In a data-bound application, changing the contents of a bound control affects the DataSet: changing the
contents of a text box changes the value of the row in the data table contained in the DataSet object. But
the changes in the DataSet aren't copied back to the database and stored persistently until you explicitly
tell ADO.NET to do so (by calling the data adapter's Update method). Although this instruction might seem
like a needless extra step (you never had to do it with the data controls provided by previous versions of
Visual Basic), it's actually a powerful feature of .NET. Why? Because you don't need to update until it's
appropriate to do so, and while the user is editing data, the application doesn't maintain a connection to the
database.
Listing 1.2 shows a pair of modified event procedures that enable editing in the data browser application.
Listing 1.2 Saving data by updating the DataSet object as the user navigates in the data browser
application

Private Sub btnNext_Click(ByVal sender As Object, ByVal e As EventArgs) _
        Handles btnNext.Click
    Me.BindingContext(DsCustomer1, "tblCustomer").Position += 1
    SqlDataAdapter1.Update(DsCustomer1)
    ShowDataStatus()
End Sub

Private Sub btnPrevious_Click(ByVal sender As Object, ByVal e As _
        EventArgs) Handles btnPrevious.Click
    Me.BindingContext(DsCustomer1, "tblCustomer").Position -= 1
    SqlDataAdapter1.Update(DsCustomer1)
    ShowDataStatus()
End Sub

Of course, updating each record as the user navigates from one record to the next isn't necessary. Because
you have programmatic control of when the DataSet is updated, you could instead choose to commit
changes back to the database when a user clicks a Save button or menu command. Or you can put off
updating entirely until several records have been changed; this procedure is known as batch updating. In
ADO.NET, writing extra code to perform batch updates isn't necessary. It's all handled by the DataSet object
(which stores the data in memory) and the SqlDataAdapter object (which is responsible for performing the
necessary database commands to ensure that the correct view of data is displayed and that data is inserted,
updated, and deleted properly). We consider further the relationship between these objects in Chapters 5
and 6.
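For instance, a Save button could trigger the batch update. The following is only a sketch; the btnSave control isn't part of the data browser application built in this chapter.

```vb
' Hypothetical Save button: commits all pending changes in one batch.
Private Sub btnSave_Click(ByVal sender As Object, ByVal e As EventArgs) _
        Handles btnSave.Click
    ' Push any in-progress edit from the bound controls into the DataSet...
    Me.BindingContext(DsCustomer1, "tblCustomer").EndCurrentEdit()
    ' ...then let the data adapter issue the INSERT, UPDATE, and DELETE
    ' commands for every changed row at once.
    SqlDataAdapter1.Update(DsCustomer1)
    ShowDataStatus()
End Sub
```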

Creating New Records in a Data-Bound Form


To create a new record in a data-bound Windows Forms application, use the AddNew method of the form's
BindingContext object. When you execute this method, any bound controls are cleared, allowing new data
to be entered. When new data has been entered, you commit the new record back to the database by
executing the Update method of the DataAdapter object (as in the preceding example).
To add the ability to create new records in the data browser application, do the following.

1. Create a new button on the form. Name the button btnNew and assign the word "New" to its Text
property.
2. In btnNew's Click event procedure, type

Private Sub btnNew_Click(ByVal sender As Object, ByVal e As _
        EventArgs) Handles btnNew.Click
    Me.BindingContext(DsCustomer1, "tblCustomer").AddNew()
    txtFirstName.Focus()
    ShowDataStatus()
End Sub

3. Run the application and click on the New button. After the bound controls in the user interface clear,
you should be able to enter a new record in the form. To save the record, move off the new record
using the Previous or Next navigation buttons.
Note that, although the Next and Previous navigation buttons perform an update on the DataSet, you don't
need to update the DataSet explicitly after you create a new record; navigating off the new record is
sufficient. However, if you exit the application after creating a new record, but before committing it back to
the database (either implicitly, by navigating to a new record, or explicitly, by calling the Update method of
the DataAdapter object), the data in the new record will be lost.
Generally, you should provide a way to cancel an edit if a new record is created or an existing record is
changed. To do so, use the CancelCurrentEdit method of the BindingContext object.
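A hypothetical Cancel button (not part of the application as built so far) might look like this:

```vb
' Hypothetical Cancel button: discards a pending new record or edit
' before it is committed to the DataSet.
Private Sub btnCancel_Click(ByVal sender As Object, ByVal e As EventArgs) _
        Handles btnCancel.Click
    Me.BindingContext(DsCustomer1, "tblCustomer").CancelCurrentEdit()
    ShowDataStatus()
End Sub
```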

Deleting Records from a Data-Bound Form


To delete records from a data-bound Windows Forms application, use the RemoveAt method of the
BindingContext object. This method takes a single parameter, the index number of the record you want to
delete. In a bound data browser application like the one used as an example in this book, you'll nearly
always want to delete the current record. To do so, pass the Position property of the BindingContext
object as the parameter. Listing 1.3 shows an event procedure that deletes data.
Listing 1.3 Deleting data in the data browser application, using the RemoveAt method of the
BindingContext object

Private Sub btnDelete_Click(ByVal sender As Object, ByVal e As _
        EventArgs) Handles btnDelete.Click
    If MsgBox("Whoa bubba, you sure?", MsgBoxStyle.YesNo, _
            "Delete record") = MsgBoxResult.Yes Then
        With Me.BindingContext(DsCustomer1, "tblCustomer")
            .RemoveAt(.Position)
        End With
    End If
End Sub

This code is based on the earlier creation of a button called btnDelete. Note that this procedure asks users if
they really want to delete a record; this query is a good practice, especially if your user interface is
constructed in such a way that users easily could accidentally click on the Delete button. (Note, however,
that displaying a message box isn't the only way to handle the problem of accidental deletions. A more
sophisticated application might provide "undo" functionality that lets users back up if they make a mistake.
Constructing this kind of feature is beyond the scope of this chapter, but it's something to consider.)
Note that the RemoveAt method is smart enough not to throw an error when you call it in inappropriate
situations (as when there's no data or after the bound controls are cleared following a call to AddNew). This
capability is a vast improvement over the data controls provided by previous versions of Visual Basic, which
forced you to write tedious code to catch the many errors that could occur when a user did something
unexpected.

Validating Data Entry in a Data-Bound Form


In database programming, validation ensures that data entered into the system conforms to rules defined by
the design of your application. These rules are called validation rules. One way to implement validation when
you're programming with a bound Windows Forms application is to write code in the RowUpdating event of
the DataAdapter object. This event is triggered just before a row is updated (a corresponding event,
RowUpdated, is triggered immediately after a row is updated). By placing validation code in the
RowUpdating event, you can be sure to catch any change made to the data, no matter which part of the
application is responsible for it.
The key to using the RowUpdating event effectively is to work with the properties and methods of its event
argument. This object is an instance of System.Data.SqlClient.SqlRowUpdatingEventArgs. In
addition to inspecting the command associated with the row update (through the object's Command
property), you can inform the data adapter to skip an update and roll back changes to bound controls.
Listing 1.4 shows this approach for the data browser application we've been creating throughout this
chapter.
Listing 1.4 Performing row-level validation by handling the RowUpdating property of the
DataAdapter object

Private Sub SqlDataAdapter1_RowUpdating(ByVal sender As Object, _
        ByVal e As System.Data.SqlClient.SqlRowUpdatingEventArgs) _
        Handles SqlDataAdapter1.RowUpdating
    If e.Row.Item("FirstName").Length = 0 Or _
            e.Row.Item("LastName").Length = 0 Then
        MsgBox("Change not saved; customer must have a first and last name.")
        e.Status = UpdateStatus.SkipCurrentRow
        e.Row.RejectChanges()
    End If
End Sub

Passing the enumerated value UpdateStatus.SkipCurrentRow to the Status property of the event
argument tells the data adapter to abort the operation, that is, to skip the update to the data because it
didn't pass the validation rule. But simply aborting the data operation isn't enough: at this point, you have a
blank text box in the user interface (and a corresponding blank field in the DataSet object). To resolve this
problem, call the RejectChanges method of the Row object contained in the event argument. Doing so
refreshes the bound user interface and tells the DataSet object that this row no longer needs to be
reconciled with the database. All is well; you can now go on editing, and the data is safe.

Validation at the Database Engine Level


In addition to performing validation at the time data is entered, remember that you can also perform
validation at the database engine level. Such validation is usually more reliable than doing it when entering
data because the validation is applied regardless of the kind of client process responsible for changing the
data. You don't have to remember to implement the validation rule in every software application that
accesses a particular table. But validation at the database engine level is less flexible: it's nearly impossible
to override, and it's also much more primitive (typically limited to preventing empty values from being
entered in fields). Additionally, you can generally perform database engine validation only at the field level;
you can't have database engine validation rules that, for example, are based on a comparison between two
fields (unless the comparison is a primary/foreign key constraint or implemented with a server-side
procedure such as a trigger).
Database engine validation is a function of database design. For example, suppose that you want to ensure
that a customer is never entered into tblCustomer without a first and last name. To do so, set up a database
engine-level validation rule, as follows.

1. In Visual Studio.NET's Server Explorer, open the table design for tblCustomer.
2. In the Allow Nulls column, uncheck the boxes for the FirstName and LastName fields.
3. From the File menu, select Save tblCustomer.
From this point on, no software process that uses this database can enter a customer record that lacks either
a first or last name. (Any attempt to do so will cause an exception to be thrown.)
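The same rule can also be expressed directly in Transact-SQL rather than through the table designer. The following is a sketch only; the varchar lengths are assumptions and should match the actual column definitions in tblCustomer.

```sql
-- Disallow NULLs at the database engine level.
-- The varchar(50) types are assumptions; match your table definition.
ALTER TABLE tblCustomer
    ALTER COLUMN FirstName varchar(50) NOT NULL

ALTER TABLE tblCustomer
    ALTER COLUMN LastName varchar(50) NOT NULL
```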


Summary
In this chapter we covered the basics of databases, as well as the easiest ways of creating Visual Basic.NET
applications to display and manipulate data stored in SQL Server. One of the key points to remember is that
correct database construction can have a significant impact on the overall performance and usability of an
application. Normalization, referential integrity, and indexing can be very beneficial when applied. However,
too much indexing can actually create more work for the database, offsetting the benefit provided. As you go
through the next few chapters and consider the business cases presented, keep these points in mind.

Questions and Answers

Q1:

In VB6 I built quick data prototypes using the data control. Is there a data control in
VS.NET?

A1:

No. All the functionality of the data control from VB6 and earlier versions has been factored into the
various data objects that we discussed in this chapter. For example, the ability of a data control
to connect to a database is now handled by the SqlConnection object. The data control's
ability to retrieve, update, and delete records is managed by the BindingContext object in
conjunction with the DataAdapter object. Navigating from one object to the next is the
responsibility of the BindingContext object, and so on. Unlike the old data controls, none of
these objects have any visual representation at run time, which works to your advantage,
enabling you to build whichever kind of data-driven user interface you want.

Q2:

Is it possible to have a primary key comprise more than one field?

A2:

Yes. Although it is not often done, such a key is known in the database world as a concatenated key.
For example, you might use such a key if you know that all the people in your database are
going to have a unique combination of first and last names. You choose to make the FirstName
and LastName fields the concatenated primary key so that users can never enter the same name
twice in the database.


Chapter 2. Structured Query Language Queries and Commands


IN THIS CHAPTER

What Is a Query?
Testing Queries with the Server Explorer
Retrieving Records with the SELECT Clause
Designating a Record Source with the FROM Clause
Specifying Criteria with the WHERE Clause
Sorting Results with ORDER BY
Displaying the Top or Bottom of a Range with TOP
Joining Related Tables in a Query
Performing Calculations in Queries
Aliasing Field Names with AS
Queries That Group and Summarize Data
Union Queries
Subqueries
Manipulating Data with SQL
Using Data Definition Language
The discussion of database and table structure in Chapter 1 demonstrated how to create a database by using
VB.NET and SQL Server. In this chapter we're concerned with manipulating data in tables and creating and
changing the structure of tables by using Structured Query Language (SQL).
SQL queries give you the ability to retrieve records from a database table, match related data in multiple
tables, and manipulate the structure of databases. SQL queries are also used when you manipulate
databases in code.
SQL is a standard way of manipulating databases. It's implemented in various forms in many relational
database systems, including Microsoft Access and SQL Server, and systems provided by other vendors such
as Oracle and IBM. (In fact, IBM gets the credit for inventing SQL.) Generally, SQL is used for creating
queries that extract data from databases, although a large subset of SQL commands perform other functions
on databases, such as creating tables and fields.
Generally, SQL commands fall into two categories:

Data Definition Language (DDL) commands, which are used to create and alter components of the
database, such as the structure of tables, fields, and indexes
Data Manipulation Language (DML) commands, designed to retrieve, create, delete, and update
records in databases
In this chapter we demonstrate how to use both kinds of commands.
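To make the distinction concrete, here is a brief, hypothetical illustration; the tblRegion table isn't part of the Novelty database and exists only for this example.

```sql
-- DDL: define a new table and its columns
CREATE TABLE tblRegion
(
    ID         int IDENTITY(1, 1) PRIMARY KEY,
    RegionName varchar(50) NOT NULL
)

-- DML: manipulate the data the table contains
INSERT INTO tblRegion (RegionName) VALUES ('Northwest')
```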


What Is a Query?
A query is a database command that retrieves records. Using queries, you can pull data from one or more
fields from one or more tables. You can also subject the data you retrieve to one or more constraints, known
as criteria, that serve to limit the amount of data you retrieve.
In Visual Basic.NET, database queries are written in SQL. SQL is a fairly standard language for retrieving and
otherwise manipulating databases; it's easy to learn and is implemented in many different databases, so you
don't have to learn a totally new query language if, for example, you migrate your database application from
SQL Server to Sybase or Oracle.
At least that's the theory. In practice, as with so many other "industry standards," every database vendor
has its own way of implementing and extending a standard, and Microsoft is certainly no exception. Although
SQL Server's implementation of SQL isn't radically different from those of other vendors, you should be
aware as you learn the language that other dialects of SQL exist. In particular, if you're starting to use SQL
Server for the first time after using Microsoft Access, you'll need to watch out for a number of pitfalls in SQL
syntax; we highlight them specifically as they come up.


Testing Queries with the Server Explorer


Visual Studio.NET's Server Explorer is a useful tool for trying the queries described in this chapter. Use the
steps presented here to create a test view that you can use to test SQL statements as you work through this
chapter.
To follow these examples, you should have access to a SQL Server. If not, see Chapter 3 for instructions on
how to install and run SQL Server. We also assume that you've added your SQL Server installation to your
Server Explorer window, as described in Chapter 1.
Creating a test view with the Visual Studio Server Explorer involves seven steps.

1. In VS.NET, create a new Windows Forms project.


2. In the Server Explorer window, locate your SQL Server and open the Novelty database you created
earlier. The database should contain folders for a number of objects, such as database diagrams,
tables, and views.
3. Right-click on the Views folder. From the pop-up window, select New View.
4. The Add Table dialog box appears, showing a list of tables in your database. Select tblCustomer and
then click on Add. A graphical representation of the table should appear in the view design window.
5. Click on Close to dismiss the Add Table dialog. A View Design window appears; it comprises four
panes: a diagram pane, a field grid, an SQL pane, and a results pane, as shown in Figure 2.1.
Figure 2.1. The View Design window

6. Check the FirstName, LastName, and Address columns in tblCustomer. The query is built as you check
each field; the query grid and SQL panes change as you check fields in the list.
7. From the VS.NET Query menu, select Run. The data grid fills with data, as shown in Figure 2.2.
Figure 2.2. The View Design window after the query has been run

You can save this query in case you want to run it again. Queries saved in the database are known as views.
For the most part, you can use them just like tables in your database applications. This handy feature can
help you manage complexity in your database application, particularly for queries that involve a number of
joined tables (as we show later in this chapter).
To save a view in VS.NET, use the menu command File, Save View1. VS.NET will prompt you to give the view
a name, say, qryCustomerList. Once you have saved the view, it is stored in the database, ready for use by
any programmer with access to the database.
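Once the view is saved, you can query it just as you would a table. For instance, assuming you saved it as qryCustomerList with the three columns selected earlier:

```sql
SELECT FirstName, LastName
FROM qryCustomerList
```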

Note
You may have noticed that we use a naming convention for tables, views, and the like that
attaches a prefix (such as tbl or qry) to the names of objects in the database. We do so for two

reasons: (1) it makes it easy for you to figure out what kind of object you're dealing with in
situations where that may not be clear (tables and views, for example, can behave nearly
identically in many cases); and (2) we used this convention in previous editions of this book and
wanted to stay consistent with those earlier editions.
Our convention will be familiar to Microsoft Access programmers in particular. Although we're
doing things a little differently than SQL Server programmers might be accustomed to, we figured
that adhering to some naming convention was better than not having one at all. Of course, in your
work, you're welcome to name things however you want.

In the next few sections, we use the View Designer to write queries that retrieve records from the database.


Retrieving Records with the SELECT Clause


The SELECT clause is at the core of every query that retrieves data. It tells the database engine which fields
to return. A common form of the SELECT clause is

SELECT *

This clause means "return all the fields you find in the specified record source." This form of the command is
handy because you don't need to know the names of fields to retrieve them from a table. Retrieving all the
columns in a table can be inefficient, however, particularly when you need only two columns and your query
retrieves two dozen.
So, in addition to telling the database engine to return all the fields in the record source, you also have the
ability to specify exactly which fields you want to retrieve. This limiting effect can improve the efficiency of a
query, particularly in large tables with many fields, because you're retrieving only the fields you need.
A SELECT clause that retrieves only the contents of the first and last names stored in a table looks like this:

SELECT FirstName, LastName

Note also that a SELECT clause isn't complete without a FROM clause (so the SELECT clauses shown in this
section can't stand on their own). For more about the SELECT clause, see examples for the FROM clause in
the next section.


Designating a Record Source with the FROM Clause


The FROM clause denotes the record source from which your query is to retrieve records; this record source
can be either a table or another stored query. You also have the ability to retrieve records from more than
one table; see the Joining Related Tables in a Query section later in this chapter for more information on
how that works.
The FROM clauses work with SELECT clauses. For example, to retrieve all the records in tblCustomer, use the
SQL statement

SELECT *
FROM tblCustomer

This query retrieves all the records and all the fields in tblCustomer (in no particular order).
To retrieve only the customers' first and last names, use the SQL statement

SELECT FirstName, LastName


FROM tblCustomer

Once you've made the change in the View Designer, use the menu command Query, Run to refresh the data
output. This command produces the result set shown in Figure 2.3.
Figure 2.3. Query results retrieved by running a SELECT against the FirstName and LastName
fields of the tblCustomer table

For reasons of efficiency, always use this technique to limit the number of fields in a SELECT clause to only
those fields you know your application will need. Note that records returned by a SELECT...FROM query are
returned in no particular order. Unless you specify a sorting order (using the ORDER BY clause discussed
later in this chapter), the order in which records are returned is always undefined.


Specifying Criteria with the WHERE Clause


A WHERE clause tells the database engine to limit the records it retrieves according to one or more criteria
that you supply. A criterion is an expression that evaluates to a true or false condition; many of the same
expressions of equivalence to which you're accustomed in Visual Basic (such as >0 and = 'Smith') exist in
SQL as well.
For example, say that you want to return a list of only those customers who live in California. You might
write an SQL query such as

SELECT FirstName, LastName, State


FROM tblCustomer
WHERE State = 'CA'

This query retrieves the record for the customer who lives in California, Daisy Klein.
Note also that the delimiter for a text string in a WHERE clause is a single quotation mark. This marker is
convenient, as you'll see later, because the delimiter for a string in VB.NET is a double quotation mark, and
SQL statements must sometimes be embedded in VB code.
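For example, because VB.NET delimits strings with double quotation marks, a single-quoted SQL literal nests cleanly inside a VB.NET string. This snippet is hypothetical; the sql variable exists only for illustration.

```vb
' The single quotes around 'CA' need no escaping inside a VB.NET string.
Dim sql As String = _
    "SELECT FirstName, LastName FROM tblCustomer WHERE State = 'CA'"
```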
You can create more sophisticated WHERE clauses by linking two or more criteria with AND and OR logic. For
example, say that you want to retrieve all the customers who live in Denver, Colorado (as opposed to those
customers who live in other cities in Colorado). To do so, you need to denote two criteria linked with an AND
operator:

SELECT FirstName, LastName, City, State


FROM dbo.tblCustomer
WHERE (State = 'CO') AND (City = 'Denver')

Hypothetically, running this query should retrieve Thurston Ryan, the customer who lives in Denver,
Colorado. If you had more than one customer in Denver, Colorado, they'd all be retrieved by this query.
However, it wouldn't retrieve any customers who live in a city named Denver in some state other than
Colorado (assuming that such a place actually exists).
If you're interested in seeing information on people who live in two states, for example, both Colorado and
California, use an OR clause to link the two criteria, as in

SELECT FirstName, LastName, City, State


FROM
tblCustomer
WHERE State='CO' OR State='CA'

Running this query retrieves the three customers from tblCustomer who live in California or Colorado. As these
examples clearly show, you can go nuts trying to link WHERE criteria with AND and OR conditions to extract
data from a table.

Note
One key to successful database development is to keep client applications from retrieving too
many records at once. Doing so will ensure that your applications run quickly and won't do bad
things such as causing users' computers to run out of memory. One of the most basic weapons
that you can use to avoid these unfortunate results is the WHERE clause.

Operators in WHERE Clauses


You can use the operators listed in Table 2.1 to construct a WHERE clause. The equality and inequality
operators work exactly the same way in SQL as they do in VB.NET.

Table 2.1. Operators for Use in WHERE Clauses

Operator    Function
<           Less than
<=          Less than or equal to
>           Greater than
>=          Greater than or equal to
=           Equal to
<>          Not equal to
BETWEEN     Within a range of values
LIKE        Matching a pattern
IN          Contained in a list of values

The BETWEEN Operator
The BETWEEN operator returns all records whose values fall between the limits you specify. For example, to return all
the orders placed between January 4 and June 5, 2001, you would write the SQL statement

SELECT *
FROM tblOrder
WHERE OrderDate BETWEEN '1/4/2001' and '6/5/2001'

This query produces the result set shown in Figure 2.4.


Figure 2.4. Query results obtained by running a SELECT against tblOrder, using the BETWEEN
operator

Note that, as with strings, date parameters in SQL Server are delimited with single quotes. If you're
accustomed to delimiting dates with pound signs (#), as in Microsoft Access, you'll have to adjust when
using dates as parameters in SQL Server.
Note also that the boundaries of a BETWEEN operator are inclusive. That is, if you ask for all the orders
placed between January 4 and June 5, as you're doing here, the result set will also include records placed on
January 4 and June 5.

The LIKE Operator and Wildcard Characters


The LIKE operator matches records to a pattern you specify. This pattern often includes a wildcard character,
such as the asterisk (*) or question mark (?) characters with which you may be familiar from working with
the MS-DOS or Windows file systems.
The percent (%) character indicates a partial match. For example, to retrieve all the records in tblCustomer
whose first names begin with the letter J, you'd use a query such as

SELECT ID, FirstName, LastName, Address, City, State


FROM tblCustomer
WHERE FirstName LIKE 'J%'

This query retrieves the three people in the customer table whose first names begin with the letter J.
You can also create wildcard matches by using the underscore character (_), which takes the place of a single
character in a pattern. For example, to locate all the customers with five-digit zip codes beginning with the
number 80, use the expression LIKE '80___', with three underscores to represent the three "wild" characters,
as in

SELECT ID, FirstName, LastName, Address, PostalCode


FROM tblCustomer
WHERE PostalCode LIKE '80___'

This query retrieves the two customers in the database who have postal codes beginning with 80.
You can also use a LIKE operator that returns a range of alphabetic or numeric values. For example, to
return a list of customers whose last names begin with the letters A through M, use the SQL statement

SELECT ID, FirstName, LastName


FROM tblCustomer
WHERE LastName LIKE '[A-M]%'

This query returns the five customers in the database whose last names begin with the letters A through M.

Note
If you're coming to SQL Server from Microsoft Access, you should know that the wildcard
characters in Access SQL are different from the wildcards in standard SQL. In Access, you use an
asterisk instead of a percent sign to match any number of characters, and you use a question
mark instead of an underscore to match any single character. In standard SQL, you use a percent
to match any number of characters, and you use an underscore to match any single character.

The IN Operator
The IN operator retrieves records that match a list of values. For example, to retrieve all the customers in
either Colorado or Wisconsin, use

SELECT FirstName, LastName, State


FROM tblCustomer
WHERE State IN ('CO', 'WI')

This query retrieves the three customers who live either in Wisconsin or Colorado. Thus you can get the
same results with IN as you do with OR . Some developers prefer to use IN when applying multiple criteria
because it makes for a somewhat tidier SQL statement.


Sorting Results with ORDER BY


The ORDER BY clause tells the database engine to sort the records it retrieves. You can sort on any field, or
on multiple fields, and you can sort in ascending or descending order. To specify a sort order, include the
ORDER BY clause at the end of any SELECT query, followed by the name of the field or fields by which you
want to sort. For example, to return a list of customers' names sorted by last name, use

SELECT ID, FirstName, LastName


FROM tblCustomer
ORDER BY LastName

This query retrieves all customers from the database, arranging them by last name.

Sorting in Descending Order


To sort in descending order, use the DESC keyword after the field by which you're sorting. For example, to
retrieve records from the tblOrder table according to who placed the most recent order, use

SELECT *
FROM tblOrder
ORDER BY OrderDate DESC

This query retrieves all orders from tblOrder, arranged with the newest order first.

Sorting on Multiple Fields


To sort on multiple fields, list the fields one after the other immediately following the ORDER BY clause,
delimited by commas. For example, to sort tblCustomer by last name, then by first name, use the SQL query

SELECT FirstName, LastName, City, State


FROM tblCustomer
ORDER BY LastName, FirstName

This query retrieves all customers from the database. Unlike our earlier customer query, the two customers
whose last names are identical (Betty Klein and Daisy Klein) are sorted correctly this time.


Displaying the Top or Bottom of a Range with TOP


The TOP keyword displays only the top or bottom few records in a large record set. In queries, TOP is
combined with an ORDER BY clause to limit the number of records to a set number of records or a percentage of
records in the result set.
For example, say that you want to view the three most recent outstanding orders in tblOrder. To do so, start
by writing a SQL statement such as

SELECT ID, OrderDate, CustomerID


FROM tblOrder
ORDER BY OrderDate DESC

The DESC keyword causes the result set to be sorted in descending (biggest to smallest) order. This query
retrieves all the orders in tblOrder by customer, with the most recent order first and the earliest order last.
This result is fine, except that in a database that stores every order received, you might have to sort
thousands of records when all you're really interested in are the last three outstanding orders. So instead,
try the SQL statement

SELECT TOP 3 *
FROM tblOrder
ORDER BY OrderDate DESC

This query retrieves the three records in tblOrder with the most recent order dates.
Note that, although you asked for three records, you're not guaranteed that exactly three records will be
returned in this query. With a TOP N query, none, one, or two records may be returned if your table has only
that many records. And if you add the WITH TIES option, records tied for last place on the sort value are
also included, so four or more records may be returned.
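SQL Server provides the WITH TIES option, which (used together with ORDER BY) includes any extra rows whose sort value ties with the last row returned. A quick sketch against the sample tblOrder table:

```sql
-- Returns the three most recent orders, plus any additional orders
-- that share the same OrderDate as the third row returned.
SELECT TOP 3 WITH TIES *
FROM tblOrder
ORDER BY OrderDate DESC
```

Without WITH TIES, SQL Server cuts the result set off at exactly three rows, even if the fourth row has the same OrderDate as the third.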
There is no such thing as "BOTTOM N" in SQL syntax, but you can still return the bottom of a range: in this
case, the earliest orders in your system. To create such a query, simply order the records by ascending
date:

SELECT TOP 3 *
FROM tblOrder
ORDER BY OrderDate

This query retrieves three records representing the three earliest orders in your database.
Sorting data in ascending order is implicit in SQL; there's no need to use the ASC keyword (to denote
ascending sort order) unless you really want to.

Creating Top Percentage Queries


You can write queries that return a percentage of records in a table. For example, if you have a table with
1,000 records and you want to return the top 1 percent of records, 10 records will usually be displayed.
Remember, though, that TOP queries can be tricky: with the WITH TIES option, more than 10 records may be
displayed in a top percentage query if more than one record stores the same value.
To return the top records in a result set according to their percentage of the total records in your table, use
the TOP N PERCENT clause. For example, to return the top 20 percent of outstanding orders in the tblOrder
table, use

SELECT TOP 20 PERCENT *
FROM tblOrder
ORDER BY OrderDate DESC

This query retrieves the two most recent orders, which is about what you'd expect from a table containing
ten rows.

[ Team LiB ]

[ Team LiB ]

Joining Related Tables in a Query


A join retrieves related information from more than one table. To create a join in a query, you must
designate the primary and foreign keys of the tables involved in the join. (We introduced these concepts in
Chapter 1.) For example, consider two related tables, tblCustomer and tblOrder, having the fields shown.
tblCustomer
ID
FirstName
LastName
Address
City
State
PostalCode
Phone
Fax
E-mail
tblOrder
ID
CustomerID
OrderDate
Though tblOrder stores information about orders and tblCustomer stores information about customers, you'll
likely want to retrieve a record set showing information about when each customer placed an order,
producing output such as the following.
FirstName   LastName   OrderDate
Jane        Winters    9/10/2001
Jane        Winters    8/16/2001
Thurston    Ryan       7/2/2001
Dave        Martin     6/5/2001
Daisy       Klein      4/4/2001

Even though the data is stored in separate tables, retrieving a result set like this one is easy to do with a
join. So long as your data design has specified that the primary key in tblCustomer (ID) is related to the
foreign key (CustomerID) in tblOrder, the correct data will be returned.

Note

In this joined record set, one of the customers is displayed more than once, even though her
name was entered in the database only once. This result reflects the fact that she has placed
multiple orders. It's a nice feature because you never have to enter the same customer's data in
the database twice, but it sometimes means that you get more information back in a query than
you want. A variety of tactics may be used for handling this situation, which we discuss later in
this chapter.

Expressing a Join in SQL


In SQL Server, you set up a join as an expression of equivalence between two fields, as in

SELECT FirstName, LastName, OrderDate


FROM tblOrder INNER JOIN tblCustomer
ON tblOrder.CustomerID = tblCustomer.ID

This SQL returns information on all the customers who have related orders in tblOrder. It returns three
columns of data: the FirstName and LastName fields from tblCustomer and the OrderDate field from
tblOrder.
Note that, in a query that includes a join, when the same field appears in two tables, you must include a
reference to the base table along with a field name (such as tblOrder.ID rather than simply ID) to denote
which table you're talking about. Fortunately, in most cases when you're using the View Designer in VS.NET
to create your query, the development environment figures out what you want to do and fills in the missing
parts for you automatically. As you've seen already, the examples presented in this book generally include
the most concise possible SQL syntax, except where more specificity is required.
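You can also shorten those table prefixes by aliasing the table names themselves in the FROM clause. This sketch is equivalent to the inner join shown above; the alias names c and o are arbitrary choices:

```sql
SELECT c.FirstName, c.LastName, o.OrderDate
FROM tblOrder AS o INNER JOIN tblCustomer AS c
ON o.CustomerID = c.ID
```

Aliases become especially handy as joins grow to involve three or more tables, where fully qualified names make the SQL hard to read.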

Using the View Designer to Create Joins


Because creating joins can be the most complicated part of queries, particularly when more than two tables
are involved, you might find it useful to have some help when creating them. Fortunately, you can use
VS.NET's View Designer to create a query comprising a join between multiple tables. Using the designer
means that you don't have to memorize complicated SQL join syntax; instead, you can create the join
graphically, as follows.

1. In Server Explorer, create a new view in the Novelty database.


2. The Add Table dialog appears. Add tblCustomer and tblOrder to your view and then click on Close. The
diagram pane of the View Designer window should look like that shown in Figure 2.5.
Figure 2.5. Creating a join between two tables in the View Designer window

Note that the View Designer automatically creates a join between the two tables. The View Designer knows
that the primary key field named ID in tblCustomer is related to the CustomerID field in the tblOrder
because the relationship between the two tables was explicitly defined when the database was created.
Running the query returns data based on the relationship between customers and orders, as shown in Figure
2.6.
Figure 2.6. A joined query in the View Designer window after it has returned data

Using Outer Joins to Return Additional Data


A conventional join returns records from two tables in which a value in one table's primary key matches a
value in a related table's foreign key. But suppose that you want to return all the records on one side of a
join whether or not there are related records? In this case, you must use an outer join. For example, the
following query lists customers and orders, including customers who do not have any orders outstanding:

SELECT FirstName, LastName, OrderDate


FROM tblCustomer LEFT OUTER JOIN
tblOrder ON tblCustomer.ID = tblOrder.CustomerID

Note the tablename.fieldname syntax used in the LEFT OUTER JOIN clause. This long name is used to avoid
ambiguity because the ID field exists in both tblCustomer and tblOrder. Because it's a left join, the table
on the left side of the expression tblCustomer.ID = tblOrder.CustomerID is the one that will display all its
data. This query returns the following result set.
FirstName   LastName   OrderDate
John        Smith      1/4/2001
John        Smith      1/9/2001
Jill        Azalia     1/14/2001
Brad        Jones      <NULL>
Daisy       Klein      2/18/2001
Daisy       Klein      3/21/2001
Daisy       Klein      4/4/2001
Dave        Martin     6/5/2001
Betty       Klein      <NULL>
Thurston    Ryan       7/2/2001
Jane        Winters    8/16/2001
Jane        Winters    9/10/2001

This result set comprises all the customers in the database whether or not they have outstanding orders. For
those customers without orders, <NULL> appears in the OrderDate field. Null is a special state indicating the
absence of data.
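Because unmatched rows come back with nulls, an outer join combined with an IS NULL test is a handy way to find the customers who have placed no orders at all. A sketch against the same two tables:

```sql
-- Customers with no matching rows in tblOrder
SELECT FirstName, LastName
FROM tblCustomer LEFT OUTER JOIN
tblOrder ON tblCustomer.ID = tblOrder.CustomerID
WHERE tblOrder.CustomerID IS NULL
```

Given the sample data shown above, this query would return Brad Jones and Betty Klein, the two customers whose OrderDate appeared as <NULL>.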
There also are right joins. The difference between a left join and a right join simply has to do with which
table is named first in the join. (Both left joins and right joins are types of outer joins and both can return
identical result sets.)
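For example, the left outer join shown earlier can be rewritten as a right outer join simply by swapping the order of the tables around the JOIN keyword; both versions return every customer:

```sql
SELECT FirstName, LastName, OrderDate
FROM tblOrder RIGHT OUTER JOIN
tblCustomer ON tblCustomer.ID = tblOrder.CustomerID
```

Here tblCustomer is on the right side of the join, so all its rows are preserved, exactly as in the LEFT OUTER JOIN version.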

[ Team LiB ]

[ Team LiB ]

Performing Calculations in Queries


You can perform calculations on fields in a query. To do so, simply replace the name of a field in a SELECT
clause with the name of an arithmetic expression. For example, say that you want to create a query to
calculate the sales tax on each item in your inventory (as stored in the table tblItem). The following SQL
query calculates a 7.5 percent sales tax for each piece of merchandise in tblItem:

SELECT ID, Name, Price, Price * 0.075 AS SalesTax
FROM dbo.tblItem

This query produces the following result.


Name                             Price   SalesTax
Rubber Chicken                   5.99    0.44925
Hand Buzzer                      1.39    0.10425
Stink Bomb                       1.29    0.09675
Disappearing Penny Magic Trick   3.99    0.29925
Invisible Ink                    2.29    0.17175
Loaded Dice                      3.49    0.26175
Whoopee Cushion                  5.99    0.44925

Because you're dealing with money here, you may need to round the result to two digits to the right of the
decimal. Fortunately, SQL Server has a ROUND function that enables you to do so easily. The most commonly
used form of ROUND takes two parameters, a decimal value and an integer that specifies how many digits to
the right of the decimal you want. The query

SELECT Name, [Retail Price], ROUND([Retail Price] + [Retail Price] * 0.075, 2)
AS PriceWithTax
FROM dbo.tblInventory

produces the following result.

Name                             Retail Price   PriceWithTax
Rubber Chicken                   5.99           6.44
Hand Buzzer                      1.39           1.49
Stink Bomb                       1.29           1.39
Disappearing Penny Magic Trick   3.99           4.29
Invisible Ink                    2.29           2.46
Loaded Dice                      3.49           3.75
Whoopee Cushion                  5.99           6.44

[ Team LiB ]

[ Team LiB ]

Aliasing Field Names with AS


SQL gives you the ability to alias, or rename, a field or expression in a query. (We used this capability in
both examples in the preceding section.) You typically need to alias a field for two reasons:

1. The underlying table has field names that are unwieldy, and you want to make the field names in the
result set easier to deal with.
2. The query that you're creating produces some sort of calculated or aggregated column that requires a
name.
Whatever your reason for wanting to alias a field name, it's easy to do with the AS clause in SQL. For
example, say that you're doing a complex series of calculations to determine the extended price on invoices
(the extended price is the item price multiplied by the quantity shipped). You also want to refer to the
calculated column as ExtendedPrice. You can do so by writing the SQL code

SELECT TOP 5 ItemID, Quantity, [Retail Price],
tblInventory.[Retail Price] * tblOrderItem.Quantity AS ExtendedPrice
FROM tblOrderItem INNER JOIN
tblInventory ON tblOrderItem.ItemID = tblInventory.ID

This query produces the following result set.


Quantity   Retail Price   ExtendedPrice
1          5.99           5.99
2          1.39           2.78
3          2.29           6.87
2          3.99           7.98
1          5.99           5.99

The entries in the ExtendedPrice field aren't stored in the database; they're calculated on the fly.
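One convenience of column aliases in SQL Server: you can refer to the alias in the ORDER BY clause (though not in WHERE or HAVING, which are evaluated before the alias is assigned). A sketch using the earlier sales-tax calculation:

```sql
SELECT ID, Name, Price, Price * 0.075 AS SalesTax
FROM tblItem
ORDER BY SalesTax DESC
```

This sorts the items from highest to lowest tax without repeating the Price * 0.075 expression in the ORDER BY clause.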

[ Team LiB ]

[ Team LiB ]

Queries That Group and Summarize Data


Frequently, you'll need to create queries that answer questions such as: How many orders came in
yesterday? In this case, you don't care who ordered items; you only want to know how many orders came
in. You can find out by using group queries and aggregate functions.
Aggregate queries summarize data according to one or more fields in common. For example, if you wanted
to see how many orders have been placed by each customer, you'd perform a query on tblOrder grouping on
the CustomerID field. The following is an example of such a query:

SELECT CustomerID, COUNT(CustomerID) AS TotalOrders


FROM tblOrder
GROUP BY CustomerID

This query produces a result set similar to the following, with one row for each customer who has placed
an order.

CustomerID   TotalOrders

Note the use of the AS clause in the SQL expression. This clause is used to give the column containing the
result of the aggregate function a name because it's calculated rather than stored in the database.
To display customer names instead of IDs, simply join data from tblCustomer, as in

SELECT tblOrder.CustomerID, FirstName, LastName,


COUNT(dbo.tblOrder.CustomerID) AS TotalOrders
FROM tblOrder INNER JOIN tblCustomer
ON tblOrder.CustomerID = tblCustomer.ID
GROUP BY FirstName, LastName, CustomerID

This query produces a result set similar to the following.

FirstName   LastName
John        Smith
Jill        Azalia
Daisy       Klein
Dave        Martin
Thurston    Ryan
Jane        Winters

In this case, the GROUP BY clause contains the CustomerID along with the FirstName and LastName fields
joined from tblCustomer. When you use GROUP BY, you must include all the fields you're grouping on; in
this case, the customer ID and name fields are all involved in the grouping, so they must all appear in the
GROUP BY clause. (Fortunately, if you forget to do that, the VS.NET development environment gently nudges
you in the right direction.)
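Grouping isn't limited to ID fields; you can group on any column the rows have in common. For example, to answer the question posed at the start of this section (how many orders came in on a given day), you might group on the order date:

```sql
-- One row per distinct order date, with the number of orders placed that day
SELECT OrderDate, COUNT(*) AS OrdersReceived
FROM tblOrder
GROUP BY OrderDate
ORDER BY OrderDate
```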

Using HAVING to Provide Criteria for Grouped Queries


We've already shown that a query criterion serves to limit the number of records retrieved by a query. In
conventional SELECT queries, you use the WHERE clause to supply query criteria. However, in grouped
queries, you use the HAVING clause instead. WHERE and HAVING are used in much the same way, although
the criterion supplied by a HAVING clause applies to aggregated rows (that is, the product of the grouping),
whereas the WHERE clause applies a criterion to individual rows. It may sound like a hair-splitting
distinction (by and large, it is) because nine times out of ten the two work nearly the same way. For
example, you can use a criterion with grouping to return a report of sales activity for customer "Jane", as in
the query

SELECT tblOrder.CustomerID, FirstName, LastName,


COUNT(dbo.tblOrder.CustomerID) AS TotalOrders
FROM tblOrder INNER JOIN tblCustomer
ON tblOrder.CustomerID = tblCustomer.ID
GROUP BY FirstName, LastName, CustomerID
HAVING FirstName = 'Jane'

This query returns a single record, indicating that Jane Winters has placed two orders with the company.
Now say that you want to display a list of frequent shoppers: customers who have placed more than one
order with your company. Because the aggregate number of orders is stored in the calculated field
TotalOrders, you might think that you could use an expression such as HAVING TotalOrders > 1 to retrieve all
your frequent customers. But unfortunately, this expression won't work because TotalOrders isn't a real field
in the database; it's a calculated field. Instead, you have to include the calculation in the HAVING clause,
using a query such as

SELECT tblOrder.CustomerID, FirstName, LastName,


COUNT(dbo.tblOrder.CustomerID) AS TotalOrders
FROM tblOrder INNER JOIN tblCustomer
ON tblOrder.CustomerID = tblCustomer.ID
GROUP BY FirstName, LastName, CustomerID
HAVING (COUNT(tblOrder.CustomerID) > 1)

The following result set is returned by this query.

FirstName   LastName
John        Smith
Daisy       Klein
Jane        Winters

This query returns three rows, each representing a customer who has placed more than one order.
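You can also combine WHERE and HAVING in a single grouped query: WHERE filters individual rows before they're grouped, and HAVING filters the groups afterward. A sketch that counts only orders placed after an arbitrary cutoff date, then keeps only the repeat customers:

```sql
SELECT CustomerID, COUNT(CustomerID) AS TotalOrders
FROM tblOrder
WHERE OrderDate >= '1/1/2001'   -- row filter, applied before grouping
GROUP BY CustomerID
HAVING COUNT(CustomerID) > 1    -- group filter, applied after grouping
```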

The SUM Function


You're not limited simply to counting records in aggregate functions. Using the SUM function, you can
generate totals for all the records returned in numeric fields. For example, say that you want to create a
query that lists the total number of items in each order. To do so, use

SELECT OrderID, SUM(Quantity) AS TotalItems


FROM tblOrderItem
GROUP BY OrderID

This query produces the following result set.


OrderID

TotalItems

23

13

12

10

As with the previous examples that involve grouping, if you want to retrieve additional related information
(such as the customer's first and last name), simply use a join. Remember, you must group on at least one
field for an aggregate function to work.

Other SQL Aggregate Functions


Table 2.2 lists the aggregate functions available to you in SQL.

Table 2.2. SQL Aggregate Functions

Function   Result
AVG        The average of all values in the column
COUNT      The number of records returned
MAX        The maximum (or largest) value in a column
MIN        The minimum (or smallest) value in a column
STDEV      The standard deviation
SUM        The total of all values in the column
VAR        The statistical variance

The syntax of these aggregate functions is essentially the same as the syntax for COUNT and SUM, described
in previous sections. For example, say that you want to get a sense of the average line-item quantity in each
purchase, which is an aggregate calculation of the average number of items each customer purchases. To do
so, use

SELECT AVG(tblOrderItem.Quantity) AS AverageLineItemQuantity


FROM tblOrder INNER JOIN
tblOrderItem ON tblOrder.ID = tblOrderItem.OrderID

This query retrieves a single value, the number 2, indicating that when customers buy items from you, they
buy them two at a time, on average.
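The other aggregates work the same way, and several can appear in one SELECT list. For example, a one-row summary of the order history might look like this:

```sql
SELECT MIN(OrderDate) AS EarliestOrder,
       MAX(OrderDate) AS LatestOrder,
       COUNT(*) AS TotalOrders
FROM tblOrder
```

Because there's no GROUP BY clause, the aggregates here are computed over the entire table, producing a single summary row.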
You can combine calculations and aggregate functions in a variety of interesting ways. For example, say that
you want a list of the total cost of all the orders in your database. You calculate the total cost of an order by
multiplying the quantity (found in tblOrderItem) times the price (found in tblInventory) and then performing
a SUM aggregate on that result. The query giving you the result you need is

SELECT tblOrderItem.OrderID, SUM(Quantity * [Retail Price])
AS OrderTotal
FROM tblInventory INNER JOIN
tblOrderItem ON tblInventory.ID = tblOrderItem.ItemID
GROUP BY OrderID

This query produces the following result set.

OrderTotal
15.64
7.98
5.99
99.17
13.96
49.07
55.88
13.97
9.16
10
14.76

[ Team LiB ]

[ Team LiB ]

Union Queries
A union query merges the contents of two tables that have similar field structures. It's useful in situations in
which you need to display potentially unrelated records from multiple record sources in a single result set.
Later in this chapter, we describe a way to store old orders in a table of their own, called tblOrderArchive.
Because of the way this archiving system is set up, the records are physically located in two separate tables.
This approach might be useful for efficiency, as it's usually faster to query a small table than a large one. But
at some point you may want to view all the current records and the archived records in a single, unified
result set. A union query lets you do so.
Suppose that you need to view the old records in tblOrderArchive in the same result set as the new records
in tblOrder. The union query you write to accomplish that is

SELECT *
FROM tblOrder
UNION
SELECT *
FROM tblOrderArchive

The result set of this query combines old and new orders in a single result set. The output looks exactly like
the original table before it was archived.
By default, union queries don't return duplicate records (that is, records with the exact same field contents
from each of the two tables). Displaying duplicate records might be useful if your record archiving system
didn't delete records after it copied them to the archive table and you wanted to display some sort of
before-and-after comparison.
You can force a union query to intentionally display duplicate records by adding the ALL keyword, however,
as in

SELECT *
FROM tblOrder
UNION ALL
SELECT *
FROM tblOrderArchive
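If you need the combined result set in a particular order, a single ORDER BY clause placed after the last SELECT sorts the entire union; the column names from the first SELECT are the ones you refer to:

```sql
SELECT * FROM tblOrder
UNION ALL
SELECT * FROM tblOrderArchive
ORDER BY OrderDate DESC
```

You can't attach a separate ORDER BY to each SELECT inside the union; the sort applies to the merged result as a whole.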

[ Team LiB ]

[ Team LiB ]

Subqueries
A subquery is a query whose result supplies a criterion value for another query. Subqueries take the place of
normal WHERE expressions. Because the result generated by the subquery takes the place of an expression,
the subquery can return only a single value (as opposed to a conventional query, which returns multiple
values in the form of rows and columns).
The only syntactical difference between a subquery and any other type of expression placed in a WHERE
clause is that the subquery must be enclosed in parentheses. For example, say that you want to create a
query that shows your most expensive items. You define an expensive item as an item whose unit cost is
above the average cost of all items in tblItem. Because the average cost can be determined (by performing
an aggregate average on the UnitCost field in tblItem), you can use this value as a subquery criterion value
in the larger query, as follows:

SELECT Name, UnitCost


FROM tblItem
WHERE (UnitCost >
(SELECT AVG(UnitCost) FROM tblItem))

In this case, the query and the subquery happen to be querying the same table, but that doesn't have to be
the case. Subqueries can query any table in the database so long as they return a single value.
The preceding SQL statement returns the following result set.
Name                             UnitCost
Rubber Chicken                   2.03
Disappearing Penny Magic Trick   2.04
Loaded Dice                      1.46
Whoopee Cushion                  2.03
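A subquery that returns a single value can stand in for any expression. Another common pattern is using MAX in a subquery to pull the row (or rows) associated with the newest date; a sketch against tblOrder:

```sql
-- Orders placed on the most recent order date in the table
SELECT ID, CustomerID, OrderDate
FROM tblOrder
WHERE OrderDate = (SELECT MAX(OrderDate) FROM tblOrder)
```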

[ Team LiB ]

[ Team LiB ]

Manipulating Data with SQL


A data manipulation command is an SQL statement that can alter records. Such commands are written by
using a subset of SQL grammar called Data Manipulation Language. Its commands don't return records;
instead, they make permanent changes to data in a database.
You generally use SQL DML commands when you need to make changes to large amounts of data based on a
criterion. For example, if you need to initiate a 10 percent across-the-board price increase in your products,
you'd use an update query (one type of DML command) to change the prices of all the items in your
inventory.

Note
The SQL examples in this section make permanent changes to data in your Novelty database. If
you hopelessly mangle the data and want to return the data to the way it was initially, you can
always reinstall it by running the Novelty script described in the Preface.

Visual Studio .NET provides a capable interface for executing DML commands. In fact, the tools provided in
VS.NET can provide you with helpful information (such as the correct connection string to use to connect
to a database), and by retrieving data from a table and changing the query type, you can have the basic
DML generated for you in a designer window.
There are two lower-level tools (that is, with little in the way of a graphical query designer) that you can
use to issue SQL DML commands to SQL Server:

SQL Query Analyzer, a GUI tool for issuing queries and commands to SQL Server
The command-line query processor called osql

You can use whichever tool you feel most comfortable with; in this chapter we use SQL Query Analyzer
because it's easier to use and more feature-rich than osql. And, in this chapter, our focus is on the actual
commands rather than how to use a specific GUI. You can find SQL Query Analyzer in the SQL Server
program group. (In Chapter 7 we discuss use of the database manipulation features of VS.NET in more
detail.)

Update Commands
An update command has the capability to alter a group of records all at once. An update command has three
parts:

1. The UPDATE clause, which specifies which table to update
2. The SET clause, which specifies which data to change
3. Optionally, the WHERE criteria, which limits the number of records affected by the update query
For example, to increase the price of all the items in your inventory, you'd use the update command:

UPDATE tblItem
SET Price = Price * 1.1
SELECT * FROM tblItem

The SELECT statement that follows the UPDATE isn't necessary to perform the update, of course; it's just a
way for you to see the results of the UPDATE once it's occurred.
The contents of tblItem after you run the update query are as follows.

Name              Description                               UnitCost   Price
Rubber Chicken    A classic laugh getter                    2.0300     6.5890
Hand Buzzer       Shock your friends                        .8600      1.5290
Stink Bomb        Perfect for ending boring meetings        .3400      1.4190
Invisible Ink     Write down your most intimate thoughts    1.4500     2.5190
Loaded Dice       Not for gambling purposes                 1.4600     3.8390
Whoopee Cushion   The ultimate family gag                   2.0300     6.5890

To limit the number of records affected by the update command, simply append a WHERE clause to the
command. For example, to apply the price increase only to big-ticket items (say, those priced at more than
$100), you'd alter the SQL as follows:

UPDATE tblInventory
SET [Retail Price] = [Retail Price] * 1.1
WHERE [Retail Price] > 100

This command increases the retail price of items currently priced at more than $100 by 10 percent.
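The WHERE clause of an update command can itself contain a subquery. For example, this sketch raises prices only on items that have actually been ordered at least once, assuming the tblOrderItem line-item table described earlier in the chapter:

```sql
UPDATE tblItem
SET Price = Price * 1.1
WHERE ID IN
  (SELECT ItemID FROM tblOrderItem)
```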

Delete Commands
A delete command can delete one or more records in a table. For example, to delete all the orders placed
before (but not on) last Halloween, you'd use the SQL statement

DELETE FROM tblOrder
WHERE OrderDate < '10/31/98'
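Delete commands accept the same WHERE machinery as update commands, including subqueries. If the tblOrderItem table holds the line items for each order, you'd want to remove those rows as well when purging old orders; a sketch of that step (run it before deleting the parent orders):

```sql
DELETE FROM tblOrderItem
WHERE OrderID IN
  (SELECT ID FROM tblOrder
   WHERE OrderDate < '10/31/98')
```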

Insert Commands

An insert command is used for two purposes:

1. Adding a single record to a table


2. Copying one or more records from one table to another
To create an append query, use the SQL INSERT clause. The exact syntax of the query depends on whether
you're inserting a single record or copying multiple records. For example, a single-record append query that
adds a new order to tblOrder might look like this:

INSERT INTO tblOrder (CustomerID, OrderDate)


VALUES (119, '6/16/2001')

Executing this query creates a new order for Customer 119, dated June 16, 2001, in tblOrder.

Note
In this insert command, you don't supply a value for tblOrder's ID field because it is an identity
column. Attempting to do so would generate an error. In general, only the database engine itself
can alter the contents of an identity column.
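After a single-record insert, you often need the identity value the database just generated. In SQL Server 2000, the SCOPE_IDENTITY() function returns the last identity value created in the current scope (the older @@IDENTITY function works too, but it can be fooled by triggers that perform their own inserts):

```sql
INSERT INTO tblOrder (CustomerID, OrderDate)
VALUES (119, '6/16/2001')

SELECT SCOPE_IDENTITY() AS NewOrderID
```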

To create the kind of insert command that copies records from one table to another, use an INSERT clause
in conjunction with a SELECT clause. For example, say that, instead of deleting old orders, you want to
archive them by periodically copying them to an archive table called tblOrderArchive, which has the same
structure as the tblOrder table. For that to work, you'll first need to create tblOrderArchive with an SQL
command:

CREATE TABLE tblOrderArchive (
ID [int] NOT NULL ,
CustomerID [int] NULL ,
OrderDate [datetime] NULL
)

Note
SQL commands that create and otherwise manipulate the structure of a database are called SQL
Data Definition Language (DDL) commands. We cover SQL DDL later in this chapter.

An SQL statement to copy old records from the tblOrder table to the tblOrderArchive table might look like
this:

INSERT INTO tblOrderArchive
SELECT * FROM tblOrder
WHERE OrderDate < '6/1/2001'

Executing this statement will copy all the records with order dates before June 1, 2001, into the
tblOrderArchive table.
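In a real archiving job you'd typically pair the copy with a delete of the originals and wrap both steps in a transaction, so that a failure partway through leaves the data untouched. A sketch of that pattern:

```sql
BEGIN TRANSACTION

-- Copy the old orders into the archive table
INSERT INTO tblOrderArchive
SELECT * FROM tblOrder
WHERE OrderDate < '6/1/2001'

-- Remove the originals only after the copy succeeds
DELETE FROM tblOrder
WHERE OrderDate < '6/1/2001'

COMMIT TRANSACTION
```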

Creating Tables with SELECT INTO


A SELECT INTO query is similar to an append query, except that it can create a new table and copy records
to it in one fell swoop. (If you're coming from the Microsoft Access universe, this method is known as a
make-table query.) For example, in the preceding demonstration you copied records from tblOrder to
tblOrderArchive, presuming that tblOrderArchive actually exists. Instead, to copy the same records into a
new table with the same structure as the original, you could use SELECT INTO, as in

SELECT * INTO tblOrderArchive


FROM tblOrder

Note
Executing this query copies all the records from tblOrder into a new table, tblOrderArchive. If
tblOrderArchive already exists when the query is run, this command won't work. This behavior is
different from the make-table query functionality provided by Microsoft Access; in Access, the
existing table is deleted and replaced by the database engine with the contents of the copied
records. To wipe out a table in SQL Server, you first need to use the DROP TABLE command, an
SQL DDL command.

With SELECT INTO, you can apply selection criteria (using a WHERE clause) in the same way you apply
criteria to an append query, as illustrated in the earlier section on append queries. Doing so enables you to
copy a subset of records from the original table into the new table you create with a make-table query.

[ Team LiB ]

[ Team LiB ]

Using Data Definition Language


Data Definition Language commands are SQL statements that enable you to create, manipulate, and destroy
elements of the database structure. Using DDL, you can create and destroy tables and alter the definition of
tables.
Data Definition Language commands are perhaps the least used statements in SQL, mainly because there
are so many good tools that help you perform chores such as creating tables, fields, and indexes. Visual
Studio .NET issues SQL DDL commands behind the scenes when you design a database structure in Server
Explorer, but it doesn't provide a facility to issue SQL DDL commands directly to the database. To do that,
you must use SQL Query Analyzer or the osql command-line tool, or issue the DDL command in code.
If you're coming from a client-server programming environment, you might be more comfortable with using
DDL to create the structure of your database. Like data manipulation commands, DDL commands don't
return result sets (which is why they're referred to as "commands" rather than "queries").

Creating Database Elements with CREATE


New database elements can be created by using the SQL CREATE clause. To create a table, use the CREATE
TABLE command, followed by the fields and data types you want to include in the table, delimited by
commas and enclosed in parentheses. For example, to create a new table, you can use this SQL statement:

CREATE TABLE tblRegion


(State char (2) ,
Region varchar (50))

The data type char(2) tells the database engine to create a fixed-length text field that can store a maximum
of two characters; varchar(50) creates a variable-length field that can store up to 50 characters.
This query creates a table with the following parts:

tblRegion
State
Region

For a complete list of data types you can use when creating fields, see the Data Types section in Chapter 1.

Adding Constraints to Tables


You can add constraints when creating a table. A constraint is similar to an index, but it's used to designate a
unique key, a primary key, or a foreign key.

You create a constraint by using the SQL CONSTRAINT clause. It takes two parameters: the name of the
index and the name of the field or fields you're interested in indexing. You can declare the index to be
UNIQUE or PRIMARY KEY, in which case the index designates that the field can accept only unique values or
that a field or fields serve as the table's primary key.

Note
The concept of indexes having names might seem a little strange if you're accustomed to
Microsoft Access; the reason is that Access buries the names of indexes in its user interface. You
can get access to the name of an index programmatically, however.

For example, as an enhancement to the tblRegion table created in the preceding demonstration, you might
add a unique index to the State field because it is used in joins. The query

CREATE TABLE tblRegion


(State char (2),
Region varchar (50),
CONSTRAINT StateIndex UNIQUE (State))

creates the table with a unique index called StateIndex on the State field.
Although this code fragment indexes the State field, it might make more sense to make the State field the
table's primary key. Doing so will index the field, ensure that no values are duplicated in the State field, and
ensure that no null values appear in the State field. The following SQL creates the tblRegion table with the
State field as its primary key:

CREATE TABLE tblRegion


(State char (2),
Region varchar (50),
CONSTRAINT StatePrimary PRIMARY KEY (State))

Designating Foreign Keys


To designate a field as a foreign key, you can use the FOREIGN KEY constraint. For example, suppose that in
your database design there is a one-to-many relationship between the State field in tblRegion and a
corresponding State field in tblCustomer. The code, then, that you'd use to create tblCustomer might look
like this:

CREATE TABLE tblCustomer


(ID int identity(1, 1),
[FirstName] varchar (20),
[LastName] varchar (30),
[Address] varchar (100),

[City] varchar (75),


[State] varchar (2),
CONSTRAINT IDPrimary PRIMARY KEY ([ID]),
CONSTRAINT StateForeign FOREIGN KEY ([State])
REFERENCES tblRegion ([State]))

Note that designating a foreign key in a CREATE TABLE command doesn't create an index on that foreign
key; it serves only to create a relationship between the two tables.

Creating Indexes with CREATE INDEX


In addition to creating indexes when you create your table (using the CONSTRAINT clause), you can also
create indexes after you've created the table (using the CREATE INDEX clause). This approach is useful
when you want to create an index on a table that already exists (as opposed to the CONSTRAINT clause,
which lets you create indexes only on tables when you create the table).
To create an index on an existing table, use

CREATE INDEX StateIndex


ON tblCustomer ([State])

To create a unique index, use the UNIQUE keyword, as in

CREATE UNIQUE INDEX StateIndex


ON tblRegion ([State])

To create a primary key on an existing table, use an ALTER TABLE command to add a PRIMARY KEY
constraint:

ALTER TABLE tblRegion

ADD CONSTRAINT StatePrimary PRIMARY KEY (State)

Note that a CREATE UNIQUE INDEX statement enforces uniqueness, but only a PRIMARY KEY constraint
also forbids null values and designates the field as the table's primary key.

Deleting Tables and Indexes with DROP


You can delete database elements by using the DROP clause. For example, to delete tblRegion, use

DROP TABLE tblRegion

You can also drop an index in a table by using the DROP clause:

USE Novelty
IF EXISTS (SELECT name FROM sysindexes
WHERE name = 'StateIndex')
DROP INDEX tblRegion.StateIndex
GO

Note that, to delete a primary key, you must know the primary key's name.
To drop individual fields from tables, use a DROP clause within an ALTER TABLE clause, as discussed in the
next section. Finally, to delete an entire database, use the DROP DATABASE clause.
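For example, dropping the primary key created earlier, and then the sample database itself, might look like
the following (the constraint and database names are those used in this chapter's examples; the ALTER
TABLE syntax is covered in the next section):

```sql
-- Drop the primary key by its constraint name
ALTER TABLE tblRegion
DROP CONSTRAINT StatePrimary

-- Remove the entire sample database
DROP DATABASE Novelty
```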

Modifying a Table's Definition with ALTER


You can modify the definition of a field in a table by using the ALTER clause. For example, to add a
CustomerType field to tblCustomer, use

ALTER TABLE tblCustomer


ADD CustomerType int

To remove a field from a database, use the DROP COLUMN clause along with an ALTER TABLE clause:

ALTER TABLE tblCustomer


DROP COLUMN CustomerType

You can also add constraints to a table by using the ALTER TABLE clause. For example, to create a
relationship between tblCustomer and tblOrder with ALTER TABLE, use

ALTER TABLE tblOrder


ADD CONSTRAINT OrderForeignKey
FOREIGN KEY ([CustomerID])
REFERENCES tblCustomer ([ID])

Again, remember that adding a FOREIGN KEY constraint doesn't create a conventional index on a field; it
serves only to create a relationship between the two tables. (UNIQUE and PRIMARY KEY constraints, by
contrast, are enforced by means of indexes, as you saw earlier in this chapter.)

[ Team LiB ]

Summary
In this chapter we covered the query technologies available to you in a VB.NET database access application.
They include queries that return records and queries that create and change database structures.
Much of what we covered in this chapter doesn't stand on its own; it will make much more sense when you
start doing application programming with SQL Server and ADO.NET.

Questions and Answers

Q1:

What's with the square brackets?

A1:

Square brackets are often inserted around object names by VS.NET and SQL Server's
administration tools. The tools are trying to protect you from problems associated with object
names that contain spaces and other reserved characters, as well as reserved words. For
example, in the Northwind example database that is installed on most SQL Server 2000
databases, there's a table called Order Details. Ignoring for a moment the question of whether
embedding spaces in table names is wise, you must refer to this table as [Order Details] to get
SQL Server to recognize it. However, GUI tools such as VS.NET try to insert the square brackets
for you whenever they can. We don't generally use them in code listings in this book because
square brackets are hard to touch-type and also because we're lazy.
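For instance, a query against that table must bracket the name. The following sketch assumes the
standard Northwind sample database and two of its Order Details columns:

```sql
-- The space in the table name requires square brackets
SELECT OrderID, Quantity
FROM [Order Details]
```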

Q2:

What is dbo?

A2:

The dbo qualifier is another of those GUI-driven helpers that often occur in SQL Server
programming. It's a way to associate a given database object with the database owner. Objects
in the database may be owned by different users; the dbo moniker is a shortcut way to say,
"Refer to the one that's owned by the database owner, whoever that may be." A database in which
objects are owned by multiple users isn't common, but it does happen. In databases in which all the
objects are owned by dbo, it's okay to drop the reference to dbo if you want (the GUI tools will
try to put them back in for you anyway).
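As a quick sketch, both of these forms refer to the same table when tblCustomer is owned by dbo:

```sql
-- Equivalent when the table is owned by the database owner
SELECT FirstName, LastName FROM dbo.tblCustomer
SELECT FirstName, LastName FROM tblCustomer
```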

Chapter 3. Getting Started with SQL Server 2000


IN THIS CHAPTER

Setting Up and Running Microsoft SQL Server 2000


Getting Started with SQL Server 2000: The Basics
In the past, many Visual Basic programmers got their first exposure to database programming through the
Jet database engine shared by Visual Basic and Microsoft Access. As soon as database applications grew
beyond a few hundred records or a few users, programmers commonly ran into limitations. Multiuser
contention for data, poor performance, and lack of advanced data and server-management features caused
many programmers to turn to an alternative architecture to resolve their database problems. That
architecture is client-server (or distributed) computing.
Client-server computing is not to be confused with multiuser computing, which Jet supports just fine. In a
multiuser architecture, a number of users share the same data over a network. That is, the database file or
files reside on a central server, which all the user workstations can access. The key is that in an architecture
such as Jet, which is not a client-server architecture, all the processing is done on the client workstations.
That is, in order to retrieve a single row defined by an SQL SELECT statement from a table containing
50,000 rows, all the rows (or at least their indexes) must first be transferred to the client workstation. No
intelligence exists on the other side of the network that can process requests and return data.
However, a client-server architecture has some sort of back end (not the body part of a programmer who
sits in a chair for 18 hours a day, but rather a piece of software) responsible for retrieving and caching data,
arbitrating contention between multiple users, and handling security. This software, Microsoft SQL Server,
for example, receives requests from client workstations, executes the requests on the server computer, and
then returns only the results to the client machine. Thus, in the case of requesting a single row from a table
with 50,000 rows of data, the SELECT statement is transferred to the server, the server's database software
executes the statement and then returns the single row to the client. The savings in network traffic is
obvious; another performance benefit is that server machines are usually stronger (that is, faster CPUs and
more memory) than client machines, so the actual statement execution and retrieval of the data are faster.
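To make the traffic difference concrete, consider a hypothetical single-row lookup against the tblCustomer
table from Chapter 2 (the ID value here is an illustrative assumption):

```sql
-- Only this statement travels to the server;
-- only the one matching row travels back
SELECT FirstName, LastName, City
FROM tblCustomer
WHERE ID = 1001
```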
If you're using Visual Basic.NET (VB.NET), Microsoft SQL Server is your obvious choice for a database back
end. Not only is it a powerful and easy-to-use database system, but a copy of SQL Server also is included
with every edition of VB.NET and Visual Studio.NET (VS.NET). We clarify which editions of SQL Server are
included with which editions of VB.NET and VS.NET after describing the different SQL Server editions.

TIP
You should avoid using Jet (MDB) databases in anything but the simplest or most memory-limited
applications. Introduction of SQL Server 2000 Desktop Engine (MSDE) eliminates the need to use
Jet databases for prototyping and/or low-cost systems. By using a freely distributable, SQL
Server-compatible database right from the start, you will never need to make query, code, or
design changes when your system needs to "grow up."

In this chapter we focus on getting started with SQL Server 2000. Our intention is to give you a whirlwind
introduction to setting up and using SQL Server to prepare you for the material and examples in the
remainder of this book. If you're new to SQL Server, the material in this chapter should be enough to get
you started and comfortable with that server. If you're familiar with SQL Server, you may still find this
chapter to be a useful refresher, and you may even learn one or two new things as well. The following is a
typical scenario.
Say that you're working as the member of a client-server or distributed development team. You have a
database server that is 95 percent functional, which is to say that it isn't really functional at all. You
still need to get your work done, but the server component of the application just isn't "ready for prime
time."
What's more, you may have only one or two server-side programmers at your disposal. Because
server-side programmers tend to have the most rarefied set of skills, this situation tends to happen
often in client-server development organizations. They're the hardest kind of programmer for
companies to hire and retain, and as a consequence, they can be the most stressed-out bunch of
people you'd ever encounter. Consequently, they are the hardest to get hold of when something goes
wrong. Moreover, client-side programmers often can't get their work done until server-side
programmers fix what's wrong with the server.
This is The Drama of the Gifted Server Programmer.
If you've ever been in a distributed development project involving more than two developers, you'll
understand this drama. One solution is to prototype your client-side application, using a mocked-up Jet
data source first and then hooking up your application to the server when it is ready. Designating an
ODBC data source or using an OLE DB data link are two easy ways to do that. The layer of abstraction
offered by ODBC or OLE DB permits you to easily create and use a prototype version of the database in
your application, switching to the "live" database whenever you want.
Placing one or more layers of abstraction between the client and the server also keeps client-side
programmers from overburdening the server-side programmer. For the server-side programmer, that
means exposing views or stored procedures that provide data services to clients; for VB.NET
programmers, it means creating code components that do much the same thing. For information on
strategies involving stored procedures and views, see the sections Creating and Running Stored
Procedures and Using Database Views to Control Access to Data later in this chapter.

Setting Up and Running Microsoft SQL Server 2000


Running a true database server is a significant departure from sharing a Microsoft Jet database file. You have
new concepts to learn and new things to worry about. However, SQL Server 2000 is much easier to set up
and maintain than previous versions, especially version 6.5 and earlier.
In this section we get you started with the minimum required to get a database up and running under SQL
Server 2000, which actually comes in several different editions:

SQL Server 2000 Standard Edition: Basic database server, appropriate for a workgroup or
department.

SQL Server 2000 Enterprise Edition: Includes all features of the Standard Edition and offers added
performance and other features required to support the largest enterprises, Web sites, and data
warehousing applications.

SQL Server 2000 Personal Edition: Appropriate for mobile users who are often disconnected from their
networks but need SQL Server as their local data store, and for running stand-alone applications on a
client-workstation computer using SQL Server. Unlike the Standard and Enterprise Editions, which
require a server version of Windows NT or Windows 2000, the Personal Edition can also run on
Windows 2000 Professional, NT 4.0 Workstation, and Windows ME or 98. This edition limits the server's
performance when more than five batches are being executed at the same time.

SQL Server 2000 Developer Edition: Includes all the features of the Enterprise Edition but is licensed
only to developers who are developing and testing SQL Server applications; it may not be used as a
production server.

SQL Server 2000 Desktop Engine (MSDE): Provides most of the functionality of the Standard Edition.
This component may be freely distributed as part of small applications or demo versions. The size of
a Desktop Engine database is limited to 2 GB, and, like the Personal Edition, its performance is
limited when more than five batches are being executed at the same time. However, it doesn't include
any of the graphical development or management tools.

Note
Every edition of VB.NET or VS.NET includes the MSDE edition of SQL Server 2000. The
Enterprise Developer and Enterprise Architect editions of Visual Studio also include the
Developer Edition of SQL Server 2000.
Keep in mind the following important points.

MSDE doesn't include the SQL Server graphical tools described in this chapter. Thus
you won't actually be able to perform the demonstrations and samples illustrated (you
do, however, have some limited graphical data tools to access MSDE within the VS.NET

development environment).
The Developer Edition of SQL Server licenses you for development only. To create a
production application with SQL Server, you must obtain the required server and client
access licenses for SQL Server 2000.

SQL Server 2000 Windows CE Edition: Used as the data store on Windows CE devices and capable of
replicating data with any of the other SQL Server 2000 editions.

Determining Installation Requirements for SQL Server 2000


To install SQL Server 2000, Microsoft says that you'll need a Pentium (or compatible) processor running at a
minimum of 166 MHz, 95 to 270 MB of hard disk space (250 MB typical, 44 MB for the Desktop Engine), a
CD-ROM drive, Internet Explorer 5.0 or later, and a supported operating system. The memory (RAM)
requirements are as follows:

Standard Edition: 64 MB minimum

Enterprise Edition: 64 MB minimum, 128 MB recommended

Personal Edition: 64 MB minimum on Windows 2000; 32 MB minimum on other operating systems

Developer Edition: 64 MB minimum

Desktop Engine: 64 MB minimum on Windows 2000; 32 MB minimum on other operating systems
If you've actually tried to run SQL Server on a 166 MHz processor with 64 MB of memory, please try to stop
laughing and resume reading now. These specifications are minimum requirements. SQL Server may very
well run on a machine this anemic, but in the real world the minimum requirement is the biggest, baddest
computer you can realistically afford. It is supposed to be the computer that runs your entire business;
scrimping on the hardware will only cause you grief later. If there's one area you want to consider maxing
out on your computer, it's memory. In practice, if you have a limited budget, you are usually better off
investing in additional memory than in additional CPU speed. A modest memory upgrade can go a long way
in improving your system's performance.

Note
Because this book is designed to be a survey of database-oriented solutions in VB.NET, we don't
explore every SQL Server feature. The SQL Books Online documentation that comes with SQL
Server is the best source for this detailed information. If you're looking for a book that is more
tutorial in nature, check out Microsoft SQL Server 2000 DBA Survival Guide by Spenik and Sledge
(Sams Publishing).

Installing SQL Server 2000


After you've designated a computer for use with SQL Server, you can proceed with installation. It is fairly
straightforward, with a few minor exceptions.

It takes a long time.


It asks you a lot of weird questions that most conventional applications don't ask.
We can't help you with the fact that it takes a long time, but we can give you some pointers about the
questions posed by SQL Server's setup application.
In general, and certainly for simple developmental configurations, you should accept the default options that
are offered by the dialog pages of the setup wizard. The following comments refer to the dialogs that require
a bit more thought.
In the Setup Type dialog box shown in Figure 3.1, you get to choose among typical, minimum, and custom
setups, as well as the paths to the folders for the SQL Server programs and data files. Be sure that you have
enough disk space on the drive where you store the data files and that they are on a path that is regularly
backed up.
Figure 3.1. Setup Type dialog box of the SQL Server Installation Wizard

In the Services Accounts dialog shown in Figure 3.2, the default is a Domain User account, but you may want
to use the Local System account if you aren't on a domain or have your own dedicated development server
machine. On this dialog page you can determine whether SQL Server should start automatically when
Windows is started. If you select this option, bear in mind that SQL Server will be started as a service from

Windows. Services act as if they're part of the operating system; they don't appear in the Task
Manager, and they can't be shut down the way normal applications can. In the next section we give more
information on how to manage a service running under Windows; see also the Controlling the
Way SQL Server Starts Up section later in this chapter.
Figure 3.2. Services Accounts dialog box of the SQL Server Installation Wizard

For a production server, it is preferable to use the default Windows Authentication Mode shown in Figure 3.3.
This mode takes advantage of the existing Windows NT/2000 user account and security mechanisms. When
an attempt to connect to the SQL Server is made, it uses the user's account information to authenticate her
and, if the user (or her group) has been granted access to the SQL Server, she is in. This approach is simple
and provides a single location for managing user accounts and groups.
Figure 3.3. Authentication Mode dialog box of the SQL Server Installation Wizard

In some situations it may be necessary to use Mixed Mode. In addition to enabling Windows Authentication,
Mixed Mode also allows SQL Server Authentication. The latter requires the definition of user accounts within
SQL Server, against which login attempts are tested. The main advantage of this mode is that it doesn't
require a trusted connection between the server and the connecting workstation, making it the mode of
choice if UNIX or Web clients are accessing the database. However, it does require additional work and
redundant account management (Windows accounts and SQL Server accounts).

Note
Often, you will find it convenient to configure a development machine in Mixed Mode so that you
can simply use the preinstalled sa (system administrator) account. Just be sure to develop a more
robust and secure approach for your production machine; at the very least, be sure to assign a
good password to the sa account!
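One way to do that on SQL Server 2000 is with the sp_password system stored procedure (the password
shown here is, of course, only a placeholder):

```sql
-- Assign a password to the sa login
-- (the old password is NULL on a fresh installation)
EXEC sp_password NULL, 'Str0ng!Passw0rd', 'sa'
```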

Starting and Stopping SQL Server with SQL Service Manager


The SQL Service Manager is used to start and stop SQL Server. You use it in situations where you need to
take down the server to perform certain tasks, or if you just don't want to run SQL Server on a particular
machine (on a development machine, for example).
You don't have to stop SQL Server under normal circumstances, which goes along with SQL Server's role as
an enterprise database system. The idea is that you're supposed to start it up and leave it running all the

time, come heck or high water. Yet, in certain rare instances, you must stop the server to perform certain
tasks, such as changing configuration options on the server or performing a hardware upgrade to the
computer on which the server resides. When one of these situations comes up, use SQL Service Manager to
take down SQL Server and bring it back up again.
SQL Service Manager doesn't have to be running for SQL Server to do its work. The SQL Service Manager
exists merely to give you control over the activation and deactivation of your server. After your server is in
production mode, you probably won't often use SQL Service Manager.
When you launch it (by selecting its icon in the SQL Server program group), SQL Service Manager looks like
the window shown in Figure 3.4.
Figure 3.4. SQL Service Manager in its pristine state, in which SQL Server is running

If SQL Server is running, the status indicator is a green arrow; if it's not running, the indicator is a red
square. To start SQL Server, click on the Start/Continue button; to stop it, click on the Stop button. It really
is easier than making toast.

Controlling the Way SQL Server Starts Up


After you set up SQL Server, the operating system automatically launches it when your server computer is
started. Through the Services control panel, you can control whether SQL Server always starts when your
computer starts. To view the current state of SQL Server and control how it runs when your computer is
started, follow these steps.

1. Launch the Windows Control Panel. Select Administrative Tools from the Control Panel.
2. Select Services from the Administrative Tools.
3. The Services control panel appears. Scroll through the list of services until you find MSSQLServer.
If you just installed SQL Server on your machine, the MSSQLServer service status is Started and its start-up
is Automatic. To stop the MSSQLServer service from the Services control panel:

1. Click on the Stop button.
2. After a few seconds, SQL Server is stopped.
3. To restart SQL Server, click on the Start button in the Services control panel.

Note
Starting and stopping SQL Server by using the Services control panel is essentially the same as
starting and stopping it from the SQL Service Manager, albeit less colorful.

Getting Started with SQL Server 2000: The Basics


After installing it, you have several minimum tasks to complete before SQL Server begins storing and
retrieving data, including:

Creating one or more databases


Creating tables in a database
Creating views and stored procedures that govern how data is retrieved from a database
Setting up user accounts and security groups
All the tasks you need to perform are described in this section. Most can be handled without writing code by
using the SQL Server Enterprise Manager utility in SQL Server 2000.

Running SQL Server Enterprise Manager


You can perform many of the most common database configuration and setup tasks in SQL Server by using a
utility called SQL Server Enterprise Manager. Because of its power and ease of use, SQL Server Enterprise
Manager is one of the most important tools in Microsoft SQL Server 2000. The utility makes the database
administrator's task easier by putting an easy-to-use graphical interface on top of a number of chores that
were formerly accomplished (and can still be accomplished) with somewhat arcane SQL commands.
You launch Enterprise Manager from its icon in your Microsoft SQL Server program group. After you've
launched it, you'll gain access to the SQL Server(s) available on your network. The following sections in this
chapter describe some of the most common tasks you will perform with Enterprise Manager in a production
application.

Note
In a new SQL Server installation, you have only one username, sa, and it has no password. You'll
obviously want to change this situation soon because a username without a password is like a
bank vault without a lock. For more information on how to manage user accounts and security in
SQL Server, see the Managing Users and Security in SQL Server Enterprise Manager section later
in this chapter.

The first time you run Enterprise Manager, you must register your SQL Server installation. Doing so lets
Enterprise Manager know which SQL Server you want to work with; it also lets you administer more than one
SQL Server installation. You register a SQL Server in the Registered SQL Server Properties dialog box, as
shown in Figure 3.5 .

Figure 3.5. Enterprise Manager's Registered SQL Server Properties dialog box

If you're attempting to register a SQL Server running on the machine you're working on, it is easiest to use
the server name (local), including the parentheses. If you're trying to connect to a SQL Server over a LAN,
it is easiest to use the Server browse (...) button to browse the servers available on your network.

Tip
The Registered SQL Server Properties dialog contains an important but often hard-to-find option
checkbox: Show system databases and system objects. When this option is unchecked, your
system databases and objects remain hidden in the various Enterprise Manager windows. That
reduces the clutter when you're working only with your application's tables and files. However, if
you want to see these system objects, just return to this dialog to edit the properties and check
this option.

After you've registered the server you want to work with, click on the OK button in the Registered SQL
Server Properties dialog. (You have to do so only once. SQL Enterprise Manager then remembers how to
connect to the server you want to work with.) Your server's name also appears in the Microsoft SQL Servers
console window (along with any other servers that you've registered). On a machine with a connection to a
local SQL Server, the Microsoft SQL Servers console window looks like that shown in Figure 3.6 .

Figure 3.6. Microsoft SQL Servers console window with a local SQL Server in Enterprise Manager

Creating a Database with SQL Enterprise Manager


After registering your server, you're ready to get down to business. The next step is to create a database
and begin populating it with database objects, such as tables, views, and stored procedures.
Although you can create databases by using SQL code, doing so with Enterprise Manager is much easier. The
reason is that Enterprise Manager lets you design most database objects graphically, shielding you from the
complexity of SQL code. To create a new database with SQL Server Enterprise Manager, do the following.
1. Right-click on the Databases folder in the Enterprise Manager's Microsoft SQL Servers console window.
2. Select New Database from the pop-up menu. The Database Properties dialog appears, as shown in
Figure 3.7 .
Figure 3.7. Enterprise Manager's Database Properties dialog box

3. Type the new database's name in the Name text box. (For the examples in this chapter we use the
name Novelty.)
4. By default, the data is stored in a file named database-name_Data.mdf, and the database transaction
log is stored in a file named database-name_Log.ldf. These default names, and the path to where the
files are stored, can be modified by changing the File Name and/or Location on the Data Files and
Transaction Log tabs. The Data Files tab is shown in Figure 3.8.
Figure 3.8. The Data Files tab of the Database Properties dialog box allows specification of
file location and growth properties.

Note
Unlike earlier versions of MS SQL Server, there is no need to predetermine and allocate the
size of the data and log files. SQL Server 2000 allows for automatic growth of the files as
necessary, in increments of either a fixed number of megabytes or a percentage of the current
file size. You should specify a maximum file size so that the file doesn't grow uncontrolled until
the entire hard disk is full.
5. Click on OK. On the hard disk drive, two new files have been created (Novelty_Data.mdf and
Novelty_Log.ldf), each with the default initial size of 1 MB.
6. The new database is created and the Database Properties dialog box closes. The new database should
appear in the Databases folder of the Microsoft SQL Servers console window.

Note
On the General tab, you can also specify a default collation for the database. A collation
determines the character set and the rules by which characters are sorted and compared. This
specification is particularly important when you're dealing with data in languages other than
English, but it is also used to specify whether case-sensitive or case-insensitive sorting and

comparing is to be used.
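The database created in the preceding steps can also be created in T-SQL with a CREATE DATABASE
command. The following is a sketch; the file paths, sizes, and growth settings are illustrative assumptions,
with MAXSIZE included per the earlier note about capping file growth:

```sql
CREATE DATABASE Novelty
ON
(NAME = Novelty_Data,
 FILENAME = 'C:\Data\Novelty_Data.mdf',
 SIZE = 1MB,
 MAXSIZE = 100MB,
 FILEGROWTH = 10%)
LOG ON
(NAME = Novelty_Log,
 FILENAME = 'C:\Data\Novelty_Log.ldf',
 SIZE = 1MB,
 MAXSIZE = 50MB,
 FILEGROWTH = 10%)
```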

Creating Tables in a SQL Server Database


In Microsoft SQL Server, you can create tables in two ways:

1. Use SQL Data Definition Language (DDL). We introduced this technique in Chapter 2 .
2. Use the graphical table-building features of SQL Server Enterprise Manager.
Both techniques have advantages and disadvantages. Using SQL DDL commands is more complicated than
building tables graphically, particularly if you haven't worked with SQL extensively in the past; using SQL is
more flexible but forces you to write and maintain code to create your database. On the one hand, DDL
commands lend themselves to being automated, as in creating a script to build a database with a single
click. They also allow functionality and options that are not exposed by the Enterprise Manager's GUI and are
a (crude) method of documenting the database schema.
On the other hand, using the SQL Server Enterprise Manager enables you to create a database structure
quickly and easily, using all the graphical user interface advantages. However, Enterprise Manager isn't as
easy to automate.
Some database programmers prefer to use SQL code to create their databases because they always have a
written record (in the form of their SQL DDL code) of what went into creating the database. The technique
you use is a function of your personal preference, your organization's standards for development, and the
kinds of database applications you're likely to create. We introduce both techniques in the following sections.
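To give a flavor of the DDL approach, here is one possible script for the customer table that the following
sections build graphically; the column list is an assumption based on this book's earlier examples:

```sql
CREATE TABLE tblCustomer
(FirstName varchar (20) NULL,
 LastName varchar (30) NULL,
 Address varchar (100) NULL,
 City varchar (75) NULL,
 State varchar (2) NULL)
```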

Tip
SQL Server 2000 has a feature that will generate the SQL DDL code for the objects in an existing
database. You can view it by right-clicking on a database in the Databases folder in the Microsoft
SQL Servers console window, selecting All Tasks, and then selecting Generate SQL Script to bring
up the Generate SQL Scripts dialog window.

Using Enterprise Manager to Create Tables in SQL Server


After you've created a database in SQL Server, you can use SQL Server Enterprise Manager to create tables
in the database. To create a table in a database, follow these steps.
1. In Enterprise Manager's Microsoft SQL Servers console window, expand the outline node that
represents the database in which you want to create a table.
2. Right-click on the Tables node.
3. Choose New Table from the pop-up menu. The Design Table dialog box appears, as illustrated in Figure
3.9 .
Figure 3.9. Enterprise Manager's Design Table dialog, which allows designing tables in a

database

Note

The caption of the Design Table dialog box will begin with New Table rather than Design Table
when you display this dialog for the first time.

4. Start by creating a table to store customers. To do so, in the Design Table dialog, click in the column
labeled Column Name. Then type the name of the first field for this table: FirstName.
5. Press Tab to move to the next column, Datatype. In this column, make the data type a variable-length
text field, or varchar, by selecting varchar from the drop-down combo box. The varchar data type
is generally used in SQL Server to store relatively small pieces of string data.
6. In the next column, enter the number 20. Doing so limits the number of characters in the FirstName
field to 20.
7. The Allow Nulls column determines whether a field allows null values. If the box is checked, null values
can be entered into the field. For the FirstName field, the Allow Nulls box should be checked.
8. Enter additional field definitions and data types into the grid one at a time. When the table definition is
done, it should look like Figure 3.10.
Figure 3.10. The Design Table dialog box containing the field definitions for the new table

Note
At this point, you might want to create a field that acts as a unique identifier for each record in
the table. In SQL Server, this type of field is referred to as an identity column. Now is the time
to do that because you can't create an identity column for the table after creating the table and
adding data to it. The reason is that key fields can't store null values, and you can only
designate non-null fields in a table before it contains any data. SQL Server isn't as flexible as
Microsoft Access is in this respect, but it's the price you pay for the increased performance and
scalability that SQL Server offers. In the following section we provide more information on
creating identity columns when you create the table.
9. When you've finished designing the table, click on the Save button on the toolbar at the top of the
dialog box.
10. The Choose Name dialog appears. Type the table's name in the box, then click on OK. You can use
nearly any name you want, but for the examples in this chapter, we use the table name tblCustomer.
11. The newly created table should appear in the Microsoft SQL Servers console window.
Creating an Identity Column to Uniquely Identify Records
It is useful (although not required) for every record to have a piece of information that uniquely identifies it.
Often, this unique identifier has nothing intrinsically to do with the data represented by the record. In SQL
Server, a column can be defined to be an identity column (analogous to the concept in Jet of an AutoNumber
field ). An identity column automatically assigns a unique numeric value to a column in each record as the
record is created.
If you're familiar with Jet's AutoNumber field, it is useful to contrast it with SQL Server's identity column. A SQL Server identity column is different from, and in some ways more flexible than, a Jet AutoNumber field.
Identity columns in SQL Server have the following attributes.

They can be of any numeric data type (in Jet, they can only be long integers).
They can increment themselves by any amount you specify (in Jet, they can only increment themselves
by 1 or a random amount).
They can start numbering at any value you specify (in Jet, they always begin at 1).
They can be overridden. This feature allows you to insert specific numbers into identity columns, to
reconstruct a record that was accidentally deleted, for example. (In Jet, identity columns are always
read-only.)
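The seed-and-increment behavior described in the list above can be sketched with a small Python class. This is an illustrative stand-in only; SQL Server maintains identity values internally, and the class name here is hypothetical.

```python
import itertools

class IdentityColumn:
    """Illustrative stand-in for a SQL Server identity column:
    numbering starts at any seed and increases by any increment."""
    def __init__(self, seed=1, increment=1):
        self._counter = itertools.count(seed, increment)

    def next_value(self):
        return next(self._counter)

# A column declared as IDENTITY(1000, 10) would number records
# 1000, 1010, 1020, and so on.
ids = IdentityColumn(seed=1000, increment=10)
values = [ids.next_value() for _ in range(3)]
```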
A SQL Server identity column is less flexible than a Jet AutoNumber field in one respect: If you're going to
create an identity column for a table, you must do it when you create the table (before adding any data).
The reason is that SQL Server requires that any field created later must allow null values; again, non-null
fields can only be created before the table contains any data.
To create an identity column using SQL Server Enterprise Manager, follow these steps.
1. In the Design Table dialog box, create a new field called ID. Make its data type int. Remember that the
SQL Server int data type is four bytes long, just like the Visual Basic.NET Integer data type.
2. Uncheck the Allow Nulls box. Doing so ensures that null values can't be inserted into this column; it
also makes this column eligible to act as an identity column.
3. The bottom portion of the Design Table dialog box displays a property page for the properties of the
currently selected column in the table. Click on (or tab to) the Identity field in the property page.
4. In the Identity list box, select Yes. Optionally, you can set values in the Identity Seed and Identity
Increment boxes; these boxes govern where automatic numbering starts and by what value each
successive number increases.
When your identity column has been set, the Design Table dialog box looks like that shown in Figure 3.11 .
Figure 3.11. Creating an identity column in the Design Table dialog box in SQL Server Enterprise
Manager

Bear in mind when you're using identity columns in SQL Server that they're not guaranteed to be sequential.
For example, if Kevin tries to create a record designated ID number 101, then Laura creates the next record
(ID number 102), and Kevin's insert transaction fails, a record with ID number 101 will never be created.
That may not be such a big deal, especially in a database application that never exposes the value of a
primary key to the user (a design principle you should strive for). But remember that "lost" identity values
are a possibility. If you use identity columns to give each record uniqueness, don't be surprised if you browse
your data someday and discover that there's no invoice number 101.
Using Other Methods to Generate Primary Keys
Your database tables aren't required to have primary keys. You'll probably find, however, that having a
primary key (even if it's a bogus value made up by the server at the time the record was created) is a good
idea. Recall that you need a primary key to do important things like joining multiple tables in queries.
Primary keys are also handy ways to refer to records in the user interface. Rather than passing the whole
record from one procedure to the next, having a primary key lets you pass a minimal amount of data
pertaining to the record.
One alternative tactic for generating primary keys is to generate a unique random value in the primary key
field for each record as it is created. This tactic is used for tables containing AutoNumber fields that have
been converted, or upsized, from Microsoft Access to SQL Server. It's also the technique used for replicated
Access databases to avoid collisions between records that are entered by remote, disconnected users.
Another tactic is to store a counter value in a temporary table and use that value to set each new record's
primary key column as the record is created. This tactic involves a transaction that reads the counter table's
current value, using it to populate the primary key of the new record, and then increments the value in the

counter table in one atomic operation. This technique has the advantage of providing a sequential numbering
system over which you have complete control. The disadvantage (when compared to the simpler technique
of creating an identity column) is that you need to write a stored procedure and add tables to your database
to implement it. Storing a counter value also creates contention for a single table, which could cause a
performance problem in a heavily loaded system.
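The counter-table tactic can be sketched in Python, using SQLite as a stand-in for SQL Server (the table and column names here are hypothetical). The important part is that the read, the insert, and the increment all happen inside a single transaction, so two inserts can't receive the same key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblCounter (NextID INTEGER)")
conn.execute("INSERT INTO tblCounter VALUES (1)")
conn.execute("CREATE TABLE tblCustomer (ID INTEGER PRIMARY KEY, Name TEXT)")

def insert_customer(name):
    # Read the counter, use it as the new record's primary key, and
    # increment it, all inside one atomic transaction.
    with conn:
        new_id = conn.execute("SELECT NextID FROM tblCounter").fetchone()[0]
        conn.execute("INSERT INTO tblCustomer (ID, Name) VALUES (?, ?)",
                     (new_id, name))
        conn.execute("UPDATE tblCounter SET NextID = NextID + 1")
    return new_id

first = insert_customer("Amy Rosenthal")
second = insert_customer("Vito Polito")
```

In SQL Server itself this logic would typically live in a stored procedure, as the text notes.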
A final tactic is to derive a key from cues supplied by the data; for example, the key for a record for a person
named Vito Polito might be VP001. If another person with the initials VP comes along, the system would give
that record the key VP002, and so on. This tactic has the advantage of providing a key that isn't totally
dissociated from the data, but it does require more coding on your part (in the form of a stored procedure
executing as a trigger, described later in this chapter).
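A rough sketch of such a data-derived key generator (the VP001-style scheme just described) in Python; the function name and the three-digit suffix format are assumptions for illustration:

```python
def derive_key(full_name, existing_keys):
    """Build a key from the person's initials plus a three-digit
    counter: VP001, then VP002 for the next person with the same
    initials, and so on."""
    initials = "".join(part[0].upper() for part in full_name.split())
    n = 1
    while f"{initials}{n:03d}" in existing_keys:
        n += 1
    return f"{initials}{n:03d}"

keys = set()
for name in ["Vito Polito", "Vera Price", "Amy Rosenthal"]:
    keys.add(derive_key(name, keys))
```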
Marking a Column as the Primary Key
When you create an identity column, you'll almost certainly want to designate that column as your table's
primary key. You can do that in SQL Server Enterprise Manager's Design Table dialog box, as follows.
1. Select the row containing the column (field) that you want to serve as the table's primary key.
2. Click on the Set Primary Key button (key icon) on the toolbar. The primary key index is added to the
table definition, and the Design Table dialog looks like that shown in Figure 3.12 . Rows with the key
icon in the first column of the grid are the fields that comprise the primary key for the table. Note also
that any field can serve as a table's primary key, not just an identity column.
Figure 3.12. Designating a column as a table's primary key in SQL Enterprise Manager

Note
You can designate multiple fields as a table's primary key; this is known as a concatenated key. You
do so in situations when, for example, you want the first and last names of every person in the
table to be unique. That would prevent the name Amy Rosenthal from being entered in the table
twice, but it wouldn't prevent other people named Amy from being entered into the table. To
designate multiple fields for the primary key, select the desired rows by using <Ctrl> Click.
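The effect of a concatenated key can be sketched with SQLite standing in for SQL Server: the duplicate first/last combination is rejected, while a second Amy with a different last name is allowed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A two-column (concatenated) primary key on first and last name.
conn.execute("""
    CREATE TABLE tblPerson (
        FirstName TEXT,
        LastName  TEXT,
        PRIMARY KEY (FirstName, LastName)
    )
""")
conn.execute("INSERT INTO tblPerson VALUES ('Amy', 'Rosenthal')")
# A second Amy with a different last name is fine...
conn.execute("INSERT INTO tblPerson VALUES ('Amy', 'Tanner')")
# ...but the same first/last combination is rejected.
try:
    conn.execute("INSERT INTO tblPerson VALUES ('Amy', 'Rosenthal')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
count = conn.execute("SELECT COUNT(*) FROM tblPerson").fetchone()[0]
```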

Using SQL Query Analyzer to Access a Database


You can issue SQL commands to SQL Server through a utility called SQL Query Analyzer (formerly ISQLW).
With SQL Query Analyzer, you can not only run SQL queries, but also perform other actions on records, such
as update, delete, and insert. You can also use SQL Query Analyzer to perform sophisticated database and
server management tasks such as creating databases, tables, views, and stored procedures.
If you're familiar with the SQL syntax, getting started with SQL Query Analyzer is easy. (Mastering it, on the
other hand, can be tricky, which is why, for many of the more complicated examples in this chapter, we rely
on SQL Query Analyzer's graphical cousin, SQL Enterprise Manager.)
To issue commands to the database by using SQL Query Analyzer, launch SQL Query Analyzer from the
Microsoft SQL Server program group (or the Tools menu of the SQL Server Enterprise Manager). SQL Query
Analyzer displays the Connect to SQL Server dialog box. Then choose the server you want to connect to,
type in a username and password, and click on Connect. The SQL Query Analyzer main window appears, as
illustrated in Figure 3.13 . When commands are executed, an additional multitab pane is added to display
results, messages, and various statistics and plans. We present the results and messages tabs in later
examples in this chapter.
Figure 3.13. The main window of SQL Query Analyzer

Once you've launched SQL Query Analyzer, you can begin issuing commands to the database in SQL. To be
sure it's working properly, though, test the connection to the database before attempting to do anything
else. You can do so with the pubs database that ships with SQL Server, as follows.
1. Tell SQL Server which database you want to use by executing the SQL USE command, followed by the
name of the database you want to use. In SQL Query Analyzer's Query window, type

USE pubs

Note
SQL commands can be entered in either uppercase or lowercase, but by convention, SQL
keywords are entered in uppercase. In this chapter we follow that convention.
2. Execute the command by pressing F5 or by clicking on the green Execute Query button on the toolbar.
SQL Query Analyzer switches to the Messages tab so that you can view SQL Server's response. If
everything worked properly, SQL Server responds with the terse message:

The command(s) completed successfully.

3. Clear both of the window panes by selecting Edit, Clear Window (or by using the keystroke shortcut
Ctrl+Shift+Delete) in each one.
4. Next, run a simple query against the pubs database to be sure that it's returning data. Type the
following SQL code in the Query window:

SELECT *
FROM authors

5. Execute the query by pressing F5 or by clicking on the green Execute Query button on the toolbar. If
everything worked correctly, SQL Query Analyzer shows you the results of the query in the Grids tab,
as illustrated in Figure 3.14. In the Messages tab, the number of affected rows is shown; for example,

(23 row(s) affected)

Figure 3.14. Results of a test query against the pubs database as displayed in the SQL Query
Analyzer window

Tip
In addition to executing all the script commands currently in the Query window, you can execute
one or more of the lines by selecting (highlighting) the lines to execute and then executing them
by pressing F5 or by clicking on the green Execute Query button on the toolbar. Doing so allows
you to easily repeat portions of commands, perhaps after modifying them.

Viewing All the Objects in Your Database By Using sp_help

SQL Server enables you to see all the objects available in any database. The system gives you this capability
through a stored procedure, a bit of code stored and executed on the server.
The stored procedure sp_help enables you to browse databases. You execute sp_help the same way you
execute any SQL query: by entering it in SQL Query Analyzer's Query window.
To get a road map to the objects in your database by using sp_help , follow these steps.
1. Switch to the Query window in SQL Query Analyzer; clear the query box by using Ctrl+Shift+Delete if
necessary.
2. In the query box, type

sp_help

3. Execute the command. SQL Server responds by generating a list of database objects similar to that
shown in Figure 3.15 .
Figure 3.15. Typical response to sp_help for the pubs database

Note
You can write your own stored procedures in SQL Server. For more on this topic, see the Creating
and Running Stored Procedures section later in this chapter. Also, although the stored procedures
you create are usually local to an individual database, other stored procedures are provided by the
system and are available to every database in a SQL Server. For another example of such a
system-provided stored procedure, see the Displaying the Text of an Existing View or Stored
Procedure section later in this chapter.

Using an Existing Database


You can work with a particular database by executing the USE command in SQL Query Analyzer. When you
designate a particular database with USE, all the subsequent SQL commands you issue are performed
against that database. This point is important to remember because you can easily issue commands
inadvertently against the wrong database if you're not careful.

Note
Inadvertently issuing commands against the wrong database is one reason that you should
designate a database other than master to be your server's default database (that is, only server
configuration data should go into the master database). Every server login can define its own
default database. You can change the default database when you create the login or anytime
thereafter; the technique for doing so is described in the Creating and Maintaining Logins and
Users section later in this chapter.

For example, to switch from the master database to the Novelty database, do the following.
1. In SQL Query Analyzer's Query window, enter

USE novelty

or select it from the database listbox on the SQL Query Analyzer window toolbar.
2. If the Novelty database exists, SQL Server responds with:

The command(s) completed successfully.

3. If the database doesn't exist, you receive

Server: Msg 911, Level 16, State 1, Line 1


Could not locate entry in sysdatabases for database 'novelty'.
No entry found with that name. Make sure that the name is entered correctly.

Remember, if you forget the name of a database (or its spelling), you can look it up in SQL Enterprise
Manager or list the available databases using the sp_helpdb stored procedure, which returns information
about a specified database or all databases.
Issuing SQL Commands to SQL Query Analyzer
You can execute any type of SQL command against a database by using SQL Query Analyzer, which has

some advantages over other methods of sending commands to SQL Server. Remembering SQL syntax can
be more difficult than using SQL Enterprise Manager, particularly when you're using seldom-executed
commands such as those for creating databases, but SQL Query Analyzer has the advantage of being
interactive. The utility responds immediately to commands you issue. SQL Query Analyzer also has a number
of features not offered by SQL Enterprise Manager, such as the capability to run queries and stored
procedures.
In Chapter 2 we discussed the syntax of most of the basic SQL commands you're ever going to want to issue
against a relational database. Most of this information is also applicable to running queries and creating
database structures in SQL Server.

Using Database Views to Control Access to Data


A view is a query definition stored in a database. It is conceptually similar to a query definition in the
Microsoft Jet database engine, in the sense that it is a stored definition that resides in the database and
gives client applications access to data.
You use views in situations when you want to give users access to data but don't want to give them direct
access to the underlying tables. However, once defined, views can be thought of as virtual tables and used
whenever a database table would be used. The fact that a view looks exactly like a table to a client
application gives you a number of advantages. For example, when users access data through views rather
than through direct access to tables:

You can change the table's design without having to change the views associated with it
You can restrict the number of rows or columns returned by the view
You can provide simple access to data retrieved from multiple tables through the use of joins contained
in the view
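The column-restriction point can be demonstrated with a small SQLite sketch (SQLite standing in here for SQL Server, with a hypothetical confidential Salary column): clients that query only the view never see the restricted column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tblEmployee
                (ID INTEGER, FirstName TEXT, LastName TEXT, Salary REAL)""")
conn.execute("INSERT INTO tblEmployee VALUES (1, 'Vito', 'Polito', 50000)")
# The view exposes every column except the confidential Salary.
conn.execute("""
    CREATE VIEW Employee_view AS
    SELECT ID, FirstName, LastName FROM tblEmployee
""")
cur = conn.execute("SELECT * FROM Employee_view")
columns = [c[0] for c in cur.description]
```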
To take full advantage of views, you should have a security strategy for your database. That permits you to
attach security permissions to views instead of tables, which makes it easier to grant and revoke permissions
from different types of users. We discuss security in the Managing Users and Security in SQL Enterprise
Manager section later in this chapter.
Creating Views in SQL Server Enterprise Manager
As with many of the database objects that you can create in SQL Server, you can create views in either SQL
Query Analyzer or SQL Server Enterprise Manager. Both techniques are fundamentally similar; SQL Server
Enterprise Manager's technique is slightly more graphical, whereas SQL Query Analyzer's technique is
interactive, permitting you to test a view as soon as you create it.
To create a view in SQL Server Enterprise Manager, do the following.
1. From the Server Manager window, right-click on the Views node in the database in which you want to
create a view. For this example, use the pubs database.
2. Select New View from the pop-up menu. The Design View window appears, as shown in Figure 3.16 .
Figure 3.16. Creating a new view in SQL Server Enterprise Manager

Note
The caption of the Design View dialog box will begin with New View rather than Design View
when you display this dialog for the first time.

Tip

The same graphical tool for designing views can be used to design queries. It can be accessed
by right-clicking on a database table, selecting the Open Table menu item, and then selecting
Query from the Open Table submenu. You can't save the query you design, however, because
a stand-alone query is not a SQL Server database object. The Query Designer is still useful,
though, for developing and testing stored procedures and for retrieving a particular set of data
from the database.
3. Display the Add Table dialog box by clicking on the Add Table button on the Design View toolbar or by
right-clicking on the Diagram (top) pane and then selecting Add Table from the pop-up menu.
4. Select the jobs table and click on the Add button (or double-click on the jobs table) to add the jobs
table to the view.
5. Select the employee table and click on the Add button (or double-click on the employee table) to add
the employee table to the view.
6. Click on the Close button to dismiss the Add Table dialog.
7. Select fields from each table. Check the job_desc field from the jobs table and the fname and lname
fields from the employee table.
8. Test the view by clicking on the Run (exclamation point) button on the Design View toolbar or by
right-clicking on any of the window's panes and then selecting Run from the pop-up menu. The results are
shown in Figure 3.17.
Figure 3.17. Results of creating and running a new view in the Design View window of the
SQL Server Enterprise Manager

9. Save the new view by clicking on the Save button on the Design View toolbar or by right-clicking on
any of the window's panes and then selecting Save from the pop-up menu.

Note
You might want to use a naming convention such as appending the letters "_view" to the name
of a view (to get, for example, SpecialCustomers_view). Doing so makes it clearer that what
you're working with is a view rather than a table. Of course, you can use any naming
convention you want, or none at all.
10. When the Save As dialog appears, type the view's name in it, then click on OK. You can use nearly any
name you want, but for this example, we use the view name EmployeeJobs_view.
With the creation of the view EmployeeJobs_view, we have an object that we can deal with as if it were a
simple table; the fact that it is actually the result of a join of two tables is hidden. Thus we can have
shorter, simpler SQL statements based on this one view, while still having a correctly designed
(normalized) database.
Similarly, we can create a view that is based on calculations or manipulations of the data in a table. For
example, suppose that we normally want to retrieve the names of employees as a single field that combines

first and last names in the format lname, fname. We could create the view by using

CREATE VIEW EmployeeNames_view AS


SELECT lname + ', ' + fname AS Name
FROM employee

Using Views in a Production Application


A view is a construct that gives you greater control over the retrieval of data in your SQL Server database.
This control manifests itself in various ways. By limiting the number of rows or columns retrieved by the
view, you control the data that a particular user can retrieve. This control can enable you to do neat tricks,
such as create selection criteria that are known only to you or lock out users from particular subsets of your
data based on their security permissions. You can do these things because each object in the
databaseincluding tables, views, and stored procedurescan be associated with individual users or security
groups. In a database that takes advantage of views and stored procedures, direct access to tables is
generally limited to the database administrator; client applications are limited to accessing views or stored
procedures that, in turn, are responsible for retrieving data from base tables.
A Hide column is an example of an application of this technique. If the Hide column of a record is set to True,
that row is never returned to a user; it's filtered out by the view responsible for retrieving the data from the
database. Client applications never know that anything has changed because they're always issuing requests
to the same view.
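The Hide-column technique can be sketched with SQLite standing in for SQL Server (the table, view, and column names here are hypothetical): the view filters out flagged rows, and client code queries only the view.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblProduct (ID INTEGER, Name TEXT, Hide INTEGER)")
conn.executemany("INSERT INTO tblProduct VALUES (?, ?, ?)",
                 [(1, 'Rubber Chicken', 0), (2, 'Whoopee Cushion', 1)])
# Rows with Hide = 1 never reach clients that query the view.
conn.execute("""
    CREATE VIEW Product_view AS
    SELECT ID, Name FROM tblProduct WHERE Hide = 0
""")
visible = [row[1] for row in conn.execute("SELECT * FROM Product_view")]
```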
Accessing databases through views, rather than through direct access to tables, is an important component
of any robust production SQL Server database installation. In addition to enabling you to limit the number of
rows and columns retrieved, shielding your database tables with views gives you the capability to change
things without breaking client applications.
This process of insulating your database design from modifications caused by changes in business rules can
be taken a step further by introducing middle-tier components. Conceptually, such components are similar to
views and stored procedures in that they shield your database design from changes in your software
application's mission, but they have advantages over views and procedures stored in SQL Server. Among
these advantages are the fact that they're easier to program, they return data in the form of objects instead
of rows and columns, and they aren't tied to any one database management system or programming
language. (See Chapter 12 for more about middle-tier components.)
Creating Views with an SQL Query Analyzer Batch
You can create views with SQL Query Analyzer. The process for doing so is less graphical but more flexible
than creating views in Enterprise Manager. To create a view of tblEmployee that doesn't contain the
(confidential) Salary field, using SQL Query Analyzer, follow these steps.
1. Enter the following code in the Query Analyzer query window. (This batch is written in such a way that
it will create the view whether or not it currently exists.)

USE novelty
GO
DROP VIEW Employee_view

GO
CREATE VIEW Employee_view as
SELECT ID, FirstName, LastName, DepartmentID FROM tblEmployee
GO
SELECT * FROM Employee_view

2. Run the batch either by pressing F5 or by clicking on the green Execute Query button on the toolbar.
The view is created and executed; the results are shown in the Grids (or Results) tab.
3. Verify that the view has been created by going to the SQL Server Enterprise Manager and selecting the
Views tab for the Novelty database.

Note
Earlier in this chapter we showed how to design a view graphically in the SQL Server Enterprise
Manager, by right-clicking on a view and then selecting Design View from the pop-up menu. You can
also edit the text of a view by double-clicking on a view and then modifying the text in the dialog
box.

In addition to being an example of how to create a view with SQL Query Analyzer, the preceding code is
another example of executing a batch in SQL Query Analyzer. The batch not only creates the view, but it also
switches to the correct database and runs the view when it's finished creating it. This result confirms that the
view is doing what you think it's supposed to be doing.
You can create batches to simplify the process of creating database objects by using SQL Query Analyzer; in
most cases when you're creating database objects, you want to do more than one thing at once. Dropping a
table, then creating a table, and then populating it with sample data is a typical use for an SQL batch;
checking to see if a user account exists and then creating that user account with a default password is
another use for it, among many others.
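The drop-create-populate batch just described can be sketched as a single script, again with SQLite standing in for SQL Server (SQLite's DROP TABLE IF EXISTS plays the role of the conditional DROP in a Transact-SQL batch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One batch: drop the table if present, re-create it, and load sample data.
conn.executescript("""
    DROP TABLE IF EXISTS tblCustomer;
    CREATE TABLE tblCustomer
        (ID INTEGER PRIMARY KEY, FirstName TEXT, LastName TEXT);
    INSERT INTO tblCustomer (FirstName, LastName) VALUES ('Vito', 'Polito');
    INSERT INTO tblCustomer (FirstName, LastName) VALUES ('Amy', 'Rosenthal');
""")
row_count = conn.execute("SELECT COUNT(*) FROM tblCustomer").fetchone()[0]
```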

Creating and Running Stored Procedures


Although views give you a great deal of control over how data is retrieved from your SQL Server database,
an even more powerful technique involves the use of stored procedures. A stored procedure is similar to a
view, except that it gives you the capability to perform more complex operations on data. For example,
stored procedures let you:

Perform calculations on data


Take or return parameters
Implement application logic that requires multiple steps or queries, using a database-oriented
programming language
Return data in a way that is easier and more efficient to program from the client side
None of these features are available with traditional views.

In some ways, it may be better to think of a stored procedure as a special kind of procedure, hence the
name. It is called a stored procedure because it is stored in the database itself, rather than being part of the
application that runs on the client machine or on the application server. The preceding list indicates that a
stored procedure can range from a simple one-line query to a routine that performs a series of complex
queries and operations before returning a result.
Stored procedures are written in their own database-oriented and database-specific programming language.
That language has most, if not all, of the programming constructs that you would expect, although sometimes
the syntax is somewhat arcane. The language used in SQL Server is called Transact-SQL.

Note
Microsoft has stated that future versions of SQL Server will allow writing stored procedures in any
language supported by the .NET platform (such as Visual Basic.NET), rather than only in Transact-SQL. That will make it easier for developers to move from one aspect of application development
to another without having to learn the syntax of a new programming language.

Although this section is by no means an exhaustive description of all the commands available to you for
stored-procedure programming, it gives you the basic information you'll need about how stored procedures
work, why they're useful, and how you can incorporate them into your applications built on SQL Server.
Creating Stored Procedures in SQL Server Enterprise Manager
You can create stored procedures in SQL Server's Enterprise Manager by doing the following.
1. In SQL Server Enterprise Manager's Microsoft SQL Servers console window, right-click on the Stored
Procedures node under the database with which you're working. For this example, use the pubs
database.
2. From the pop-up menu, select New Stored Procedure. Its Properties window appears.

Note
Although the Stored Procedure Properties window looks like a fixed-size window, it actually can
be resized from one of its edges or corners, like other resizable windows. Thus you can resize
the window appropriately for the amount of text to be displayed.
3. Write the text of the procedure, as illustrated in Figure 3.18 .
Figure 3.18. Creating a stored procedure in SQL Server Enterprise Manager

4. When you're done with the procedure, click on the OK button at the bottom of the Stored Procedure
Properties window.
Running a Stored Procedure from SQL Enterprise Manager
You can run stored procedures (as well as views and other SQL commands) from within SQL Server
Enterprise Manager. Doing so is helpful when you want to test procedures or views that you've created. To
test a stored procedure in SQL Enterprise Manager, follow these steps.
1. Select SQL Query Analyzer from SQL Server Enterprise Manager's Tools menu. The SQL Query Analyzer
application is launched.
2. In the Query window, type the name of the stored procedure that you want to execute. For example,
to execute the stored procedure you created in the preceding example, type

ProcEmployeesSorted

3. Execute the query by pressing F5 or by clicking on the green Execute Query button on the SQL Query
Analyzer toolbar. The procedure executes and (if there is any data in the table) returns a result set in
the Grids (or Results) tab.
4. Select the Stored Procedures node in the Enterprise Manager's SQL Servers console window to verify
that the newly stored procedure has been created (you may need to click on the Refresh button on the
toolbar to force the Servers console window to be updated).
Of course, you can run a stored procedure by directly running SQL Query Analyzer; you don't need to
start it from within the SQL Server Enterprise Manager.

Creating Stored Procedures in SQL Query Analyzer


The steps for creating stored procedures in SQL Query Analyzer are nearly identical to the way you create
them in SQL Enterprise Manager.

Note
Be sure that you create the stored procedure in the Novelty database. It's easy to forget to switch
to the correct database (using the USE command or listbox in SQL Query Analyzer) before issuing
commands against it. Creating stored procedures with SQL Server Enterprise Manager makes
committing this error harder.

To create a stored procedure in SQL Query Analyzer, execute the Create Procedure command.
1. In SQL Query Analyzer, enter the following code in the Query window:

CREATE PROCEDURE GetCustomerFromID


@custID int
as
SELECT * from tblCustomer
WHERE ID = @custID

2. This code creates a stored procedure called GetCustomerFromID . It takes a parameter, @custID ,
and returns a record for the customer that matches the @custID argument. (Because the ID field is
tblCustomer's primary key, this procedure will always return either zero or one record.)
3. Execute the command to create the stored procedure.
4. Return to the Query window and test your stored procedure by running it. To run it, try to retrieve a
record from the table by typing the code

GetCustomerFromID 22

SQL Server responds by returning the record for customer ID 22, as illustrated in Figure 3.19 .
Entering a different customer ID as the parameter value to the stored procedure will give you a
different record in return.
Figure 3.19. A single customer record returned by the stored procedure GetCustomerFromID
in SQL Query Analyzer

The procedure obviously returns data only if there is data in the table to retrieve.
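A client-side analogue of GetCustomerFromID can be sketched in Python, with SQLite standing in for SQL Server: because ID is the primary key, a parameterized lookup returns either one row or none.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tblCustomer
                (ID INTEGER PRIMARY KEY, FirstName TEXT, LastName TEXT)""")
conn.execute("INSERT INTO tblCustomer VALUES (22, 'Vito', 'Polito')")

def get_customer_from_id(cust_id):
    # The ? placeholder plays the role of the @custID parameter.
    return conn.execute(
        "SELECT * FROM tblCustomer WHERE ID = ?", (cust_id,)).fetchone()

found = get_customer_from_id(22)
missing = get_customer_from_id(99)
```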

Note
Now might be a good time to load the database with some customer data. A text file script named
CustomerData.sql that loads customer data into the Novelty database is available at
http://www.awprofessional.com/titles/0672323435 . Additional scripts are available there for
loading data into other database tables.

Displaying the Text of an Existing View or Stored Procedure


You can use the stored procedure sp_helptext to display the code for a view or stored procedure. To
display this data, enter sp_helptext , followed by the name of the database object that you want to view.
The SQL Server processor then returns the full text of the view or stored procedure. For example, to see the
code for the view Employee_view that you created in the earlier section on views, do the following.
1. In SQL Query Analyzer's Query pane, type

sp_helptext Employee_view

2. Execute the stored procedure by pressing F5 or by clicking on the green Execute Query button on the
SQL Query Analyzer toolbar. The code that defines the stored procedure is returned as results in the
Grids tab, as illustrated in Figure 3.20 .
Figure 3.20. Displaying the text of a view using the stored procedure sp_helptext

Creating Triggers
A trigger is a special type of stored procedure that's executed when data is accessed in a particular table.
You can think of triggers almost as event procedures that execute when data is updated, deleted, or inserted
into a table.
You generally use triggers when you need to do something complicated to your data in response to some
kind of data access. For example, you might use a trigger to keep a log every time that certain information
in the database is changed, or you might use one to create a complicated default value for a field in a new
record, based on queries of one or more tables.
You shouldn't use triggers for simple defaults; instead, you should use the default command or property. You
shouldn't use them to maintain referential integrity; you should use SQL Server's inherent referential
integrity constraint features for that. When you need to do something that goes beyond what's possible with
SQL Server's feature set, you should consider using a trigger.
For example, you can use triggers to provide a unique value in a column to serve as a record's primary key;
this tactic is used in Microsoft Access Upsizing Tools, which applies a trigger to generate a random primary
key for each record. (You can also use identity columns for this purpose, as discussed previously in this
chapter.) An example of such a trigger is

CREATE TRIGGER tblCustomer_ITrig ON dbo.tblCustomer


FOR INSERT
AS
DECLARE @randc int, @newc int

SELECT @randc = (SELECT convert(int, rand() * power(2, 30)))


SELECT @newc = (SELECT ID FROM inserted)
UPDATE tblCustomer SET ID = @randc WHERE ID = @newc

Note
For each of these triggers to work properly and be able to update the ID column, you must reset
the ID column so that it is specified as not being an identity column. To do so, you have to return
to the Design Table dialog and set the ID column's Identity property to "No".

Creating a random number to uniquely identify a record is by far the simplest technique for generating a
primary key. However, it has two drawbacks. First, the primary keys are generated in no discernable order,
which may seem like a cosmetic problem. But, if you're trying to create an invoicing system, it's helpful to
know that invoice 20010 follows invoice 20009.
The other and potentially more serious problem is the fact that there's no guarantee that the randomly
generated primary key is actually going to be unique. The reason is that the trigger doesn't check to see if
the random number it came up with has been used by some other record in the database. Granted, a
random integer has a very small chance of being generated twice by the system (because an SQL Server
integer data type is a four-byte whole number that can store values in the range of about negative 2.1 billion
to about positive 2.1 billion).
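One way to mitigate the uniqueness problem is to have the trigger check for collisions and redraw. The following is a sketch, not the book's code: the trigger name is hypothetical, and like the original example it assumes single-row inserts and a non-identity ID column.

```sql
-- Hypothetical variant of the insert trigger that redraws the random
-- value until it doesn't collide with an existing ID. Assumes the ID
-- column is not an identity column and that inserts are single-row.
CREATE TRIGGER tblCustomer_ITrigUnique ON dbo.tblCustomer
FOR INSERT
AS
DECLARE @randc int, @newc int
SELECT @newc = (SELECT ID FROM inserted)
SELECT @randc = (SELECT convert(int, rand() * power(2, 30)))
-- Keep drawing random values while a collision exists
WHILE EXISTS (SELECT * FROM tblCustomer WHERE ID = @randc)
    SELECT @randc = (SELECT convert(int, rand() * power(2, 30)))
UPDATE tblCustomer SET ID = @randc WHERE ID = @newc
```

This trades a small amount of insert-time work for a guarantee that the generated key is unique at the moment it is assigned.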

Business Case 3.1: Creating a Trigger That Enables Soundalike Searches


Brad Jones, president of Jones Novelties, Incorporated, has approved the preliminary work of his database
developer. She is now ready to tackle the next kind of database problem: queries on people's names that
involve misspellings and homophones (words that sound the same but are spelled differently), which can be
difficult to handle. Is a search being conducted on Smith when the person spells it Smythe? Is it McManus or
MacManus? Anyone with an unusual last name knows the problems that such similarities can cause.
Jones's database developer recognizes that she's going to run into this kind of problem, so she decides to
take advantage of a function of SQL Server to resolve this problem. This function, soundex() , converts a
word to an alphanumeric value that represents its basic sounds. If she stores the soundex value of a name at
the time she creates it, she can then search on the soundex value of the name in a query. The query returns
more records, but it returns all the records that match the criterion.
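You can see how soundex() collapses spelling variants by querying it directly in SQL Query Analyzer. A quick sketch (the literal values are illustrative):

```sql
-- Both expressions return the same soundex code (S530),
-- so a search keyed on the code matches either spelling
SELECT soundex('Smith') AS SmithCode,
       soundex('Smythe') AS SmytheCode
```
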
Implementing this feature in the Jones Novelties database requires several steps:

Altering the tblCustomer table to accommodate a new LastNameSoundex field


Running an update query to supply soundex values for existing records in the tblCustomer table
Creation of a trigger that populates the LastNameSoundex field when the record is created or changed
Creation of a stored procedure that returns all the customers whose last name sounds like a particular
value

The database developer begins by altering the tblCustomer table to accommodate a soundex value for each
record in the database. She issues the following command to SQL Query Analyzer:

ALTER TABLE tblCustomer add


LastNameSoundex varchar(4) NULL

Next, she runs an update command that gives soundex values to records that are already in the database,
which she only has to do once. She runs the update by issuing the following SQL command to SQL Query
Analyzer:

UPDATE tblCustomer
SET LastNameSoundex = soundex(LastName)
GO
SELECT LastName, LastNameSoundex
FROM tblCustomer
GO

Including the SELECT statement in the batch after the update isn't necessary, but it's there if the database
developer wants to confirm that the operation worked and see which data it changed.
Now she can create the trigger that will insert a soundex value for each customer as he's entered into the
database. She enters the following code for this trigger in SQL Query Analyzer:

CREATE TRIGGER trCustomerI


ON tblCustomer
FOR insert, update
as
UPDATE tblCustomer
SET tblCustomer.LastNameSoundex = soundex(tblCustomer.LastName)
FROM inserted
WHERE tblCustomer.ID = inserted.ID

Note
Although SQL Server 2000 allows definition of multiple triggers of the same type (Insert, Update, Delete) for a
single table, the order of their execution is not fully controllable. You can specify which is to be executed first
and which is to be executed last. To ensure that the preceding trigger is executed after all other Insert triggers
for tblCustomer (such as the one to assign a value to the ID column), the following line is executed in the Query
Analyzer after the trigger has been created:

sp_settriggerorder @triggername = 'trCustomerI', @order = 'last', @stmttype = 'INSERT'

The reason that this trigger seems a bit more complicated than it needs to be has to do with how triggers
are executed. The rule for triggers is that they're executed only once, even if the insert, update, or delete
that caused the trigger to execute is some kind of crazy batch process involving thousands of records. As a
result of this rule, the triggers that the database developer writes must be capable of handling a potentially
unlimited number of records.
The key to handling the appropriate set of records in a trigger is to perform an update based on all the
possible records that were changed by the procedure that caused the trigger to execute in the first place.
How does a trigger know which records were affected by this procedure? Triggers have access to this
information through virtual tables called inserted and deleted. The inserted virtual table contains the
record(s) inserted (or updated) by the procedure that launched the trigger; the deleted virtual table contains
the data deleted by the procedure that launched a delete trigger.
Because Jones's database developer is building both an insert trigger and an update trigger, referencing the
records in the inserted virtual table covers all the inserted or updated records, no matter how they were
inserted or updated. Every record is assured to have a soundex value generated for it by the trigger.
Now that she has a bulletproof trigger that creates a soundex value for any record in tblCustomer, she tests
it by inserting a record that fools a conventional query. Assuming that she will have a number of people
named Smith in her database, the following insert command should suffice:

insert into tblCustomer (FirstName, LastName)


values ('Abigail', 'Smythe')

She can confirm that the trigger created a soundex value for this record by immediately querying it back
from the database:

SELECT LastNameSoundex, LastName


FROM tblCustomer
WHERE LastName = 'Smythe'

Now that she's confirmed that her trigger works, she can create a stored procedure that takes advantage of
the LastNameSoundex column. This procedure takes a parameter (the last name of the person she's
looking for) and returns all the people in tblCustomer whose names sound like the name for which she's looking. The
code to create the stored procedure is

CREATE PROC LastNameLookup


@name varchar(40)
as
SELECT * from tblCustomer
where soundex(@name) = LastNameSoundex

Finally, she's ready to retrieve records from the database, based on their soundex values. To do so, she
executes the LastNameLookup stored procedure:

LastNameLookup 'smith'

After executing this procedure, SQL Query Analyzer returns a result set consisting of every person whose last
name is similar to Smith in the database, including some Smythes, as shown in Figure 3.21 .
Figure 3.21. Result set returned by the LastNameLookup stored procedure

Managing Users and Security in SQL Server Enterprise Manager


One of the most important reasons for using SQL Server is to manage multiple users who are attempting to
access the same data at the same time. Although a number of problems arise from this situation (such as
users accessing privileged information and two users attempting to update the same record at the same
time), you can resolve many of them with server-side settings.
SQL Server's security features give you a great deal of flexibility in determining who gets access to data.
Each database can have its own set of users, and each user can have his own set of permissions. A
permission set gives a user the capability to access and change data and (if you choose) potentially create
and destroy database objects himself.
SQL Server's security features also enable you to assign roles to individual users to facilitate the assignment
of permissions. For example, you may choose to create a developer role that has permission to access all the
objects in the database, a manager role that has the capability to access sensitive information such as salary
and sales information about the company, and a user role for normal users without extraordinary
permissions in accessing the database. How you assign users to roles is up to you, but you should use roles
even in a simple database to make managing access easier.
Creating and Maintaining Logins and Users
A robust and flexible security system requires giving users their own identities. SQL Server gives you this
ability by letting you designate logins and users. A login represents an account (or human being) that has
access to your SQL Server. Logins are used to create users. Creating a user permits you to give a login

specific permissions for a particular database. You can also add users to roles to give them a broad group of
permissions all at once.
If the difference between a login and a user doesn't make sense to you, think about it this way: A login is
created at the server level; a user is created at the database level. In other words, you must have a login
before you can become a user of a particular database.
To begin creating a user, create a login for an individual.
1. In SQL Server Enterprise Manager, expand the Security folder for the SQL Server that you want to
work with. (Logins don't belong to any individual database; instead, they belong to the server itself.)
2. Right-click on the Logins node and select New Login from the pop-up menu.
3. The SQL Server Login Properties dialog box appears, as shown in Figure 3.22 .
Figure 3.22. SQL Server Login Properties dialog box for a New Login

4. If you're creating a new SQL Login, select the SQL Server Authentication option button and then type
the name that the person will use in the Login Name box. Optionally, enter a password for this Login.
5. If you're mapping a Windows NT/2000 account to a SQL Server account, select the Windows
Authentication option button and then click on the Name browse button to select the existing Windows
NT/2000 account.

Note
When using SQL Server Authentication, you should consider establishing a procedure in the
client applications you build, whereby a person can establish and change her password. You
can implement this procedure by using the sp_password stored procedure. When you're using

Integrated Security, that isn't necessary, as the standard Windows NT/2000 password and
authentication mechanisms are used.
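As a sketch, such a call might look like this (the password values are placeholders; with two arguments, sp_password changes the password of the current login):

```sql
-- Change the current login's password;
-- 'oldpw' and 'newpw' are placeholder values
sp_password 'oldpw', 'newpw'
```
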
6. Set the default database for this login by selecting Novelty from the Database listbox.
7. You can add this login as a user to a particular database at this time. To do so, click on the Database
Access tab in the SQL Server Login Properties dialog box. Then check the databases to which access
should be granted for this login and check the roles that this login has when accessing the database, as
shown in Figure 3.23 .
Figure 3.23. Creating a new login and adding access to a database

8. Click on OK when you've finished assigning this login to users. The login is created, and any users you
created for the login are added to the appropriate database(s). It is displayed immediately in
Enterprise Manager's Servers console window, as shown in Figure 3.24 .
Figure 3.24. A new login and a new user displayed in the Server Manager window

Managing Roles with SQL Server Enterprise Manager


SQL Server 2000 uses roles to group users who have the same permissions. Any user added to a role
inherits the permissions of that role. And changing the permissions of a role changes the permissions of all
the users assigned to that role. That way, in order to add or revoke a large number of permissions for a
particular user, you simply change which role the user belongs to.
SQL Server 2000 has two types of roles: Server roles and Database roles. Server roles control access to
operations that affect the entire SQL Server, such as starting and stopping the server, configuring advanced
features such as replication, managing security, and creating databases. Database roles control access to
operations and data for a specific database.
To add a user to a Server role in Enterprise Manager, do the following.
1. In Enterprise Manager's Microsoft SQL Servers console, expand the Security folder for the server you
want to alter and select the Server Roles node. Doing so will display the fixed set of Server roles.
2. Right-click on the role that you want to alter and choose Properties from the pop-up menu.
Alternatively, you can just double-click on the role that you want to alter. The Server Role Properties
dialog box is displayed, as shown in Figure 3.25 .
Figure 3.25. The Server Role Properties dialog box for the Process Administrators Server
Role

3. To add a login to this role, click on the Add button and select the login(s) from the list of available
logins.
4. To remove a login from this role, select the login(s) from the list of logins who are currently members
of this role and then click on the Remove button.
5. Click on OK to close the Server Role Properties dialog box.

Tip
You can also add or remove a specific user from a Server role by using the Server Roles tab of the
SQL Server Login Properties dialog box, discussed previously in the Creating and Maintaining
Logins and Users section.

To add a user to a Database role in SQL Server Enterprise Manager, do the following.
1. In the Microsoft SQL Servers console, select the Roles node for the database that you want to modify.
Doing so will display the available Database roles.
2. Right-click on the role that you want to alter and choose Properties from the pop-up menu.
Alternatively, you can just double-click on the role that you want to alter. The Database Role Properties
dialog box is displayed, as shown in Figure 3.26 .
Figure 3.26. The Database Role Properties dialog box for the db_accessadmin role

3. To add a user to this role, click on the Add button and select the user(s) from the list of available users
for this database.
4. To remove a user from this role, select the user(s) from the list of users who are currently members of
this role and then click on the Remove button.
5. Click on OK to close the Database Role Properties dialog box.

Note
SQL Server 2000 also supports user-defined database roles, in addition to the fixed database roles
that we have discussed. These roles allow for customized access to data and operations for the
database (at this point the Permission button would be enabled). More information on user-defined
database roles can be found in SQL Server Books Online.

Testing Security Features in SQL Query Analyzer


You might be curious to see what happens when a "civilian" user attempts to use database objects for which
he doesn't have permissions. Because SQL Query Analyzer lets you log in as any user, you can test its
security features by using its Query dialog as follows.
1. Verify that you did not check the db_owner checkbox for the user you created for the Novelty database
in Figure 3.23 .
2. Log out of SQL Query Analyzer by choosing the menu command File, Disconnect (or File, Disconnect All).
3. Use the menu command File, Connect to log back into SQL Server through Query Analyzer. This time,
rather than logging in as sa, log in as the user you created in the preceding demonstration. Rather
than starting in the master database as you do when you log in as sa, you're in the Novelty database
(or whatever database your login defaults to).
4. Now try to run a query on a table you don't have permission to access by executing

SELECT * from tblCustomer

SQL Server responds with

Msg 229, Level 14, State 1


SELECT permission denied on object tblCustomer, database Novelty, owner dbo

Note
The preceding lines (as well as the one in step 5) reflect the assumption that you're using the
Novelty database for the demonstrations. If you're using a different database, you'll need to
make the appropriate modifications.
5. Now try executing a stored procedure by executing

LastNameLookup 'smith'

SQL Server responds by retrieving all the names that sound like Smith in tblCustomer.

Applying Security Attributes in SQL Query Analyzer


Operations related to database security can be performed in SQL Query Analyzer. You generally do so when
you want a database's security features to be created by the same SQL batch that creates the database. If
you always append the commands pertaining to security to the same SQL batch that creates the database
object, you're less likely to forget to apply security features to new database objects that you create.
To create a new SQL Server Authentication login using the Query Analyzer, use the sp_addlogin stored
procedure. For example, to create the login Frances, execute the command

sp_addlogin 'Frances'

To give the login Frances the password "stairmaster", add the password as an additional argument to the
sp_addlogin procedure:

sp_addlogin 'Frances', 'stairmaster'

If instead of adding a SQL Server login, you want to add a Windows login for an existing Windows account,
use the sp_grantlogin stored procedure, in a similar way. Note, however, that you can't specify a

password, as passwords aren't part of the definition of a Windows-type login in SQL Server (they are handled
by the standard Windows password mechanisms).
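For example, a sketch of granting a login to an existing Windows account (the domain and account names here are hypothetical):

```sql
-- Grant an existing Windows account access to this SQL Server;
-- MYDOMAIN\Frances is a placeholder account name
sp_grantlogin 'MYDOMAIN\Frances'
```
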
To make the login Frances a user of the Novelty database, use the sp_adduser procedure:

USE novelty
GO
sp_adduser 'Frances', 'Frannie'
GO

To show a list of all the logins and default databases in your SQL Server installation with the SQL Query
Analyzer, use the SQL command:

USE master
GO
SELECT name, dbname
from syslogins

To add a user to a Database role, use the sp_addrolemember stored procedure:

sp_addrolemember 'db_datawriter','Frannie'

You can display a list of all the Database roles in a database by using the stored procedure sp_helprole .
You apply and remove permissions for a particular database object by using the SQL Grant and Revoke
commands. The Grant command permits a user granted a particular role to have access to a database
object, whereas the Revoke command removes a permission. For example, to grant members of the public
role complete access to tblCustomer, use the SQL command:

GRANT ALL
on tblCustomer
to public

If instead, you want to restrict members of that group to selecting data from tblCustomer, qualify the Grant
command with the Select option:

GRANT SELECT
on tblCustomer
to public

To revoke permissions on a database object, use the Revoke command. For example, to revoke permission
to access tblCustomer from those previously granted the public role, use the SQL command:

REVOKE ALL
on tblCustomer
to public

You can also grant or revoke permissions for update, select, delete, and insert on tables and views. You can
further grant or revoke permissions to execute stored procedures.
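For example, to let members of the public role run the LastNameLookup stored procedure created earlier, you could use a Grant command of this form:

```sql
-- Permit the public role to execute the stored procedure
GRANT EXECUTE
on LastNameLookup
to public
```
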
Determining Who Is Logged In with sp_who
You have the capability to determine which users are logged into a database at any time by using the stored
procedure sp_who . It returns information on the users who are logged into the system and which
database(s) they're working in. To use sp_who , execute it in SQL Query Analyzer's Query window. SQL
Query Analyzer returns a list of current logins, such as shown in Figure 3.27 .
Figure 3.27. Results of running the sp_who stored procedure

Viewing a list of currently logged-in users gives you the ability to do a number of things, such as terminate
user sessions from the server, as described in the next section.
Ending a Process with the kill Command
In SQL Server, the system administrator has the ability to kill a processsuch as a user session or a
database lockwith the kill command. You generally do this when a user's session terminated abnormally
and you want to get rid of her hung session, or when a client procedure has initiated a lock on a piece of
data and won't let go. (These situations are rare, but they do happen, particularly in development.)

To use the kill command, you must first run the sp_who stored procedure (if you're trying to kill a user
session) or the sp_lock procedure (if you're trying to kill a database lock). Both procedures return a column
called spid, the server process ID of the process. After you know the spid, you can kill the process by using the
kill command.
For example, say that you run sp_who and notice that there's a hung session, with an spid of 10, from a
user who you know for certain won the lottery three weeks ago and won't be returning to work. To kill spid
10, issue the following command in SQL Query Analyzer:

kill 10

The bogus session is immediately killed.

Note
It's a good idea to run sp_who periodically, particularly during development, just to see what's
going on in your database.

Removing Objects from the Database


The SQL Server term for removing an object from the database is to drop the object. When you drop an
object from the database, it is deleted permanently. For tables that contain data, both the structure and the
data are deleted permanently.
To drop a database object (a table, for example) in SQL Server Enterprise Manager, simply right-click on it.
In the pop-up menu, select Delete. The object is deleted.
To drop a database object in SQL Query Analyzer, use the drop command. For example, to drop the
tblCustomer table, use

DROP TABLE tblCustomer
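When writing scripts, you can guard a drop so that it doesn't fail if the object isn't there. A sketch using the sysobjects catalog table:

```sql
-- Drop tblCustomer only if it exists as a user table
if exists (select * from dbo.sysobjects
           where id = object_id(N'[dbo].[tblCustomer]')
           and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    DROP TABLE [dbo].[tblCustomer]
```
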

Business Case 3.2: Generating an SQL Script That Creates a Database


To run the sample code in this book first requires running the script NoveltyDB.sql to create and set up the
Novelty database on your computer. (Did you look inside that script? Did you wonder how it was written?)
Because a database design may undergo many changes while it is being developed, the developer should
periodically create a script that documents the database and also allows her to create (and re-create) the
database automatically. Although the developer could have complete control and write such a script
manually, she is much better off using the SQL Server Enterprise Manager to generate the script, at least as
the basis of a script that she may go ahead and modify. This approach is much faster and much less error-prone.

The database developer at Jones Novelties, Incorporated, has decided to take this approach to creating the
company's database. The script that she develops allows a simple installation of the database objects on a
computer that may not even have the database defined on it. In other words, the script does everything
necessary to create and install the database and its objects on a "virgin" computer, which is normally
required when a system is being installed at a new site. The database schema that comprises the Novelty
database is shown in Figure 3.28 .
Figure 3.28. The schema of the Jones Novelties, Incorporated, Novelty database

The script that creates the database will be executed in SQL Query Analyzer. To minimize error messages,
the script destroys any old database objects each time it's run. That is, whenever a change is made in the
design of the database in design mode, the entire database can be re-created from scratch by simply
executing the entire script again. This approach ensures that all the changes made in the database design
are applied to the database each time the developer rebuilds it.
The drawback of this technique, of course, is the fact that if the developer isn't careful, the batch will wipe
out all the tables in the database, and the data contained in the tables. So the developer may want to
consider disabling or deleting such scripts from the system after putting the database server into production.
As always, backing up the database before making changes is essential, in case something goes wrong.
Let's now say that you're the database developer for Jones Novelties, Incorporated. To generate scripts, do
the following.
1. Open the SQL Server Enterprise Manager and right-click on the Novelty database.
2. Select All Tasks from the pop-up menu that is displayed and then select Generate SQL Scripts from the
nested menu shown. Doing so displays the Generate SQL Scripts dialog box.
3. Click on the Show All button to display all available objects for the selected database.
4. You now have the option of selecting one or more objects to be scripted. Check the Script all objects
checkbox so that all database objects are selected, as shown in Figure 3.29 .
Figure 3.29. Scripting all objects in the Generate SQL Scripts dialog box

5. Select the Formatting tab. In addition to the default settings, check the Include descriptive headers in
the Script files checkbox.

Tip
Be sure to remember to check the Include descriptive headers in the Script files option on the
Formatting tab because it will automatically add a header line that includes the date and time
that the script was generated. You will be very grateful for having done so when you suddenly
find yourself with several versions of a script and you're not sure which one really is the latest
(or correct) version.
6. You could stop at this point, if you only wanted to generate the script for the database objects.
However, to generate the script needed to create the actual physical database as well, continue by
clicking on the Options tab.
7. Under the Security Scripting Options, check the Script database checkbox. You could also elect to
script the database users, roles, and logins, but for now assume that there is a separate administrative
task to handle that.
8. Under the Table Scripting Options, check the Script indexes, Script triggers, and Script PRIMARY keys,
FOREIGN keys, defaults, and check constraints checkboxes, as shown in Figure 3.30
Figure 3.30. Options tab for the Generate SQL Scripts dialog box

9. Click on OK to start the process. Doing so displays the standard Save As dialog, which is waiting for the
name and path of the file (with an .sql extension) to save the script to. Enter the path and filename for
saving the script and click on Save. When the scripting task has been completed, click on OK to dismiss
the dialog.

Note
The script generated on your machine may not be identical to the following script, owing to
specific machine, server, or database settings. However, you should have no trouble following
along.
The script shown is littered with GO statements to ensure that the preceding commands are
executed before continuing with the script. You will also often see blocks of commands such as

SET QUOTED_IDENTIFIER OFF


GO
SET ANSI_NULLS ON
GO

and

SET QUOTED_IDENTIFIER ON
GO

SET ANSI_NULLS ON
GO

before the execution of some commands. They ensure that the database is correctly configured
(temporarily) for the required operation and then reset following execution of the command.

Although we introduce the script in pieces in this section, the idea behind writing an SQL script is to execute
it as one big procedure. Accordingly, when you're generating or writing real-world scripts, dump the whole
thing into the SQL Query Analyzer Query window (either by loading it from a text file or by copying and
pasting) and hit the Execute button to run the script. Alternatively, you can highlight specific sections or
create multiple scripts and execute them individually. When developing SQL scripts with SQL Query Analyzer,
you can edit the commands in the Query windows, test them by executing them and checking the results,
and then save them to a file when you're done.
The first thing that you need to do is to create the physical database. Listing 3.1 presents the script that will
do so.
Listing 3.1 Script to create the Novelty database

/****** Object: Database Novelty Script Date: 10-Jul-02 12:41:09 PM ******/

IF EXISTS (SELECT name FROM master.dbo.sysdatabases WHERE name = N'Novelty')
DROP DATABASE [Novelty]
GO
CREATE DATABASE [Novelty] ON (NAME = N'novelty_Data',
FILENAME = N'c:\program files\microsoft sql server\mssql\data\Novelty_Data.mdf',
SIZE = 3, FILEGROWTH = 10%)
LOG ON (NAME = N'novelty_Log',
FILENAME = N'c:\program files\microsoft sql server\mssql\data\Novelty_Log.LDF',
SIZE = 3, FILEGROWTH = 10%)
COLLATE Latin1_General_CI_AI
GO

Before trying to create any new object, the generated script will always check to see if the object exists, and
then drop (delete) it if it does exist. After checking/dropping the Novelty database, the script creates the
new database.

Note
If you're writing or customizing the database script, you can turn the EXISTS test around to
prevent your script from dropping a table that contains data. You typically do that in a production
database, as you don't want to inadvertently destroy a table that contains data. For a database
that's being developed, however, dropping the database unconditionally, if it exists, is usually
appropriate.

The physical data and log files are specified, along with original size and growth values. This line of code, in
which actual disk file paths and names are specified, is the one line of the script that you may very well want
to change before running it for a new installation.
The code in Listing 3.2 sets various database options. You can read about the meaning of each option by
looking it up in the SQL Server documentation (Books Online).
Listing 3.2 Script to set database options for Novelty database

exec sp_dboption N'Novelty', N'autoclose', N'false'


GO
exec sp_dboption N'Novelty', N'bulkcopy', N'false'
GO
exec sp_dboption N'Novelty', N'trunc. log', N'false'
GO
exec sp_dboption N'Novelty', N'torn page detection', N'true'
GO
exec sp_dboption N'Novelty', N'read only', N'false'
GO
exec sp_dboption N'Novelty', N'dbo use', N'false'
GO
exec sp_dboption N'Novelty', N'single', N'false'
GO
exec sp_dboption N'Novelty', N'autoshrink', N'false'
GO
exec sp_dboption N'Novelty', N'ANSI null default', N'false'
GO
exec sp_dboption N'Novelty', N'recursive triggers', N'false'
GO
exec sp_dboption N'Novelty', N'ANSI nulls', N'false'
GO
exec sp_dboption N'Novelty', N'concat null yields null', N'false'
GO
exec sp_dboption N'Novelty', N'cursor close on commit', N'false'
GO
exec sp_dboption N'Novelty', N'default to local cursor', N'false'

GO
exec sp_dboption N'Novelty', N'quoted identifier', N'false'
GO
exec sp_dboption N'Novelty', N'ANSI warnings', N'false'
GO
exec sp_dboption N'Novelty', N'auto create statistics', N'true'
GO
exec sp_dboption N'Novelty', N'auto update statistics', N'true'
GO

Now that the database has been created, you can go ahead and use it; that is, run commands against it. If
you wanted to execute the remaining schema creation commands against a different database (for example,
for testing on the same server), you could simply specify a different database in the use command:

USE [NoveltyTest]
GO

Before proceeding to the object creation commands, the script checks for and deletes all existing objects
that it intends to create. They include constraints, triggers, stored procedures, views, and tables. This order
is significant because a table can't be dropped if any of its associated objects still exist. The code for doing
this task is shown in Listing 3.3 .
Listing 3.3 Script to delete existing objects in the Novelty database

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].
[FK_tblOrder_tblCustomer]') and OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[tblOrder] DROP CONSTRAINT FK_tblOrder_tblCustomer
GO
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].
[FK_tblEmployee_tblDepartment]') and OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[tblEmployee] DROP CONSTRAINT FK_tblEmployee_tblDepartment
GO
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[FK_
tblOrderItem_tblInventory]') and OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[tblOrderItem] DROP CONSTRAINT FK_tblOrderItem_tblInventory
GO
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[FK_
tblOrderItem_tblOrder]') and OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[tblOrderItem] DROP CONSTRAINT FK_tblOrderItem_tblOrder
GO

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[FK_
tblCustomer_tblRegion]') and OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[tblCustomer] DROP CONSTRAINT FK_tblCustomer_tblRegion
GO
/****** Object: Trigger dbo.trCustomerI Script Date: 10-Jul-02 12:41:09 PM ******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[trCustomerI]') and
OBJECTPROPERTY(id, N'IsTrigger') = 1)
drop trigger [dbo].[trCustomerI]
GO
/****** Object: Stored Procedure dbo.DeleteEmployee Script Date: 10-Jul-02 12:41:09 PM
******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[DeleteEmployee]')
and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[DeleteEmployee]
GO
/****** Object: Stored Procedure dbo.GetCustomerFromID Script Date: 10-Jul-02 12:41:09 PM
******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].
[GetCustomerFromID]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[GetCustomerFromID]
GO
/****** Object: Stored Procedure dbo.InsertEmployee Script Date: 10-Jul-02 12:41:09 PM
******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[InsertEmployee]')
and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[InsertEmployee]
GO
/****** Object: Stored Procedure dbo.InsertEmployeeOrg Script Date: 10-Jul-02 12:41:09 PM
******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].
[InsertEmployeeOrg]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[InsertEmployeeOrg]
GO
/****** Object: Stored Procedure dbo.LastNameLookup Script Date: 10-Jul-02 12:41:09 PM
******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[LastNameLookup]')
and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[LastNameLookup]
GO
/****** Object: Stored Procedure dbo.SelectEmployees Script Date: 10-Jul-02 12:41:09 PM
******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[SelectEmployees]')
and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[SelectEmployees]
GO

/****** Object: Stored Procedure dbo.UpdateEmployee Script Date: 10-Jul-02 12:41:09 PM


******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[UpdateEmployee]')
and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[UpdateEmployee]
GO
/****** Object: Stored Procedure dbo.procEmployeesSorted Script Date: 10-Jul-02 12:41:09
PM ******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].
[procEmployeesSorted]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[procEmployeesSorted]
GO
/****** Object: View dbo.EmployeeDepartment_view Script Date: 10-Jul-02 12:41:09 PM
******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].
[EmployeeDepartment_view]') and OBJECTPROPERTY(id, N'IsView') = 1)
drop view [dbo].[EmployeeDepartment_view]
GO
/****** Object: View dbo.qryEmployee_view Script Date: 10-Jul-02 12:41:09 PM ******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].
[qryEmployee_view]') and OBJECTPROPERTY(id, N'IsView') = 1)
drop view [dbo].[qryEmployee_view]
GO

/****** Object: Table [dbo].[tblCustomer] Script Date: 10-Jul-02 12:41:09 PM ******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[tblCustomer]') and
OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[tblCustomer]
GO
/****** Object: Table [dbo].[tblDepartment] Script Date: 10-Jul-02 12:41:09 PM ******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[tblDepartment]')
and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[tblDepartment]
GO
/****** Object: Table [dbo].[tblEmployee] Script Date: 10-Jul-02 12:41:09 PM ******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[tblEmployee]') and
OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[tblEmployee]
GO
/****** Object: Table [dbo].[tblInventory] Script Date: 10-Jul-02 12:41:09 PM ******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[tblInventory]') and
OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[tblInventory]
GO

/****** Object: Table [dbo].[tblOrder] Script Date: 10-Jul-02 12:41:09 PM ******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[tblOrder]') and
OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[tblOrder]
GO
/****** Object: Table [dbo].[tblOrderItem] Script Date: 10-Jul-02 12:41:09 PM ******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[tblOrderItem]') and
OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[tblOrderItem]
GO
/****** Object: Table [dbo].[tblRegion] Script Date: 10-Jul-02 12:41:09 PM ******/
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[tblRegion]') and
OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[tblRegion]
GO

The script now goes ahead and creates the new database objects. First the tables are created, as shown in
Listing 3.4 .
Listing 3.4 Script to create tables in the Novelty database

/****** Object: Table [dbo].[tblCustomer] Script Date: 10-Jul-02 12:41:10 PM ******/
CREATE TABLE [dbo].[tblCustomer] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[FirstName] [varchar] (20) COLLATE Latin1_General_CI_AI NULL ,
[LastName] [varchar] (30) COLLATE Latin1_General_CI_AI NULL ,
[Company] [varchar] (50) COLLATE Latin1_General_CI_AI NULL ,
[Address] [varchar] (50) COLLATE Latin1_General_CI_AI NULL ,
[City] [varchar] (30) COLLATE Latin1_General_CI_AI NULL ,
[State] [char] (2) COLLATE Latin1_General_CI_AI NULL ,
[PostalCode] [varchar] (9) COLLATE Latin1_General_CI_AI NULL ,
[Phone] [varchar] (15) COLLATE Latin1_General_CI_AI NULL ,
[Fax] [varchar] (15) COLLATE Latin1_General_CI_AI NULL ,
[Email] [varchar] (100) COLLATE Latin1_General_CI_AI NULL ,
[LastNameSoundex] [varchar] (4) COLLATE Latin1_General_CI_AI NULL
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[tblDepartment] Script Date: 10-Jul-02 12:41:11 PM ******/
CREATE TABLE [dbo].[tblDepartment] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[DepartmentName] [varchar] (75) COLLATE Latin1_General_CI_AI NOT NULL)
ON [PRIMARY]
GO
/****** Object: Table [dbo].[tblEmployee] Script Date: 10-Jul-02 12:41:11 PM ******/
CREATE TABLE [dbo].[tblEmployee] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[FirstName] [varchar] (50) COLLATE Latin1_General_CI_AI NOT NULL ,
[LastName] [varchar] (70) COLLATE Latin1_General_CI_AI NOT NULL ,
[DepartmentID] [int] NULL ,
[Salary] [money] NULL
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[tblInventory] Script Date: 10-Jul-02 12:41:11 PM ******/
CREATE TABLE [dbo].[tblInventory] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[ProductName] [varchar] (75) COLLATE Latin1_General_CI_AI NOT NULL ,
[WholesalePrice] [money] NULL ,
[RetailPrice] [money] NULL ,
[Description] [ntext] COLLATE Latin1_General_CI_AI NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
/****** Object: Table [dbo].[tblOrder] Script Date: 10-Jul-02 12:41:12 PM ******/
CREATE TABLE [dbo].[tblOrder] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[CustomerID] [int] NULL ,
[OrderDate] [datetime] NULL ,
[Amount] [money] NULL
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[tblOrderItem] Script Date: 10-Jul-02 12:41:12 PM ******/
CREATE TABLE [dbo].[tblOrderItem] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[OrderID] [int] NOT NULL ,
[ItemID] [int] NOT NULL ,
[Quantity] [int] NULL ,
[Cost] [money] NULL
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[tblRegion] Script Date: 10-Jul-02 12:41:12 PM ******/
CREATE TABLE [dbo].[tblRegion] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[State] [char] (2) COLLATE Latin1_General_CI_AI NOT NULL ,
[RegionName] [varchar] (25) COLLATE Latin1_General_CI_AI NULL
) ON [PRIMARY]
GO

Then the constraints are created, as shown in Listing 3.5 .


Listing 3.5 Script to create constraints in the Novelty database

ALTER TABLE [dbo].[tblCustomer] WITH NOCHECK ADD
CONSTRAINT [PK_tblCustomer] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[tblDepartment] WITH NOCHECK ADD
CONSTRAINT [tblDepartment_IDPK] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[tblEmployee] WITH NOCHECK ADD
CONSTRAINT [PK_tblEmployee] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[tblInventory] WITH NOCHECK ADD
CONSTRAINT [PK_tblInventory] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[tblOrder] WITH NOCHECK ADD
CONSTRAINT [PK_tblOrder] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[tblOrderItem] WITH NOCHECK ADD
CONSTRAINT [PK_tblOrderItem] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[tblRegion] WITH NOCHECK ADD
CONSTRAINT [PK_tblRegion] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO
CREATE UNIQUE INDEX [IX_tblRegion] ON
[dbo].[tblRegion]([State]) ON [PRIMARY]
GO
ALTER TABLE [dbo].[tblCustomer] ADD
CONSTRAINT [FK_tblCustomer_tblRegion] FOREIGN KEY
(
[State]
) REFERENCES [dbo].[tblRegion] (
[State]
) ON DELETE CASCADE ON UPDATE CASCADE
GO
ALTER TABLE [dbo].[tblEmployee] ADD
CONSTRAINT [FK_tblEmployee_tblDepartment] FOREIGN KEY
(
[DepartmentID]
) REFERENCES [dbo].[tblDepartment] (
[ID]
) ON DELETE CASCADE ON UPDATE CASCADE
GO
ALTER TABLE [dbo].[tblOrder] ADD
CONSTRAINT [FK_tblOrder_tblCustomer] FOREIGN KEY
(
[CustomerID]
) REFERENCES [dbo].[tblCustomer] (
[ID]
) ON DELETE CASCADE ON UPDATE CASCADE
GO
ALTER TABLE [dbo].[tblOrderItem] ADD
CONSTRAINT [FK_tblOrderItem_tblInventory] FOREIGN KEY
(
[ItemID]
) REFERENCES [dbo].[tblInventory] (
[ID]
) ON DELETE CASCADE ON UPDATE CASCADE ,
CONSTRAINT [FK_tblOrderItem_tblOrder] FOREIGN KEY
(
[OrderID]
) REFERENCES [dbo].[tblOrder] (
[ID]
) ON DELETE CASCADE ON UPDATE CASCADE
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

Finally, the views, stored procedures, and triggers are created, as shown in Listing 3.6 .
Listing 3.6 Script to create views, stored procedures, and triggers in the Novelty database

/****** Object: View dbo.EmployeeDepartment_view Script Date: 10-Jul-02 12:41:13 PM ******/
CREATE view EmployeeDepartment_view
as
select e.ID, FirstName, LastName, DepartmentName
from tblEmployee e, tblDepartment t
where e.DepartmentID = t.ID
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
/****** Object: View dbo.qryEmployee_view Script Date: 10-Jul-02 12:41:13 PM ******/
create view qryEmployee_view as
SELECT ID, FirstName, LastName, DepartmentID
FROM tblEmployee
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
/****** Object: Stored Procedure dbo.DeleteEmployee Script Date: 10-Jul-02 12:41:13 PM
******/
CREATE PROCEDURE dbo.DeleteEmployee
(
@Original_ID int
)
AS
SET NOCOUNT OFF;
DELETE FROM tblEmployee WHERE (ID = @Original_ID)
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON

GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
/****** Object: Stored Procedure dbo.GetCustomerFromID Script Date: 10-Jul-02 12:41:13 PM
******/
create procedure GetCustomerFromID
@custID int
as
select * from tblCustomer
where ID = @custID

GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS OFF
GO
/****** Object: Stored Procedure dbo.InsertEmployee Script Date: 10-Jul-02 12:41:13 PM
******/
CREATE PROCEDURE dbo.InsertEmployee
(
@FirstName varchar(50),
@LastName varchar(70),
@DepartmentID int,
@Salary money
)
AS
SET NOCOUNT OFF;
if (@Salary = 0 or @Salary is null)
begin
-- Do complicated salary calculations
set @Salary = @DepartmentID * 10000
end
INSERT INTO tblEmployee(FirstName, LastName, DepartmentID, Salary) VALUES
(@FirstName, @LastName, @DepartmentID, @Salary)
GO
SET QUOTED_IDENTIFIER OFF

GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
/****** Object: Stored Procedure dbo.InsertEmployeeOrg Script Date: 10-Jul-02 12:41:13 PM
******/
CREATE PROCEDURE dbo.InsertEmployeeOrg
(
@FirstName varchar(50),
@LastName varchar(70),
@DepartmentID int,
@Salary money
)
AS
SET NOCOUNT OFF;
INSERT INTO tblEmployee(FirstName, LastName, DepartmentID, Salary) VALUES (@FirstName,
@LastName, @DepartmentID, @Salary)
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
/****** Object: Stored Procedure dbo.LastNameLookup Script Date: 10-Jul-02 12:41:13 PM
******/
create proc LastNameLookup
@name varchar(40)
as
select * from tblCustomer
where soundex(@name) = LastNameSoundex
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON

GO
/****** Object: Stored Procedure dbo.SelectEmployees Script Date: 10-Jul-02 12:41:13 PM
******/
CREATE PROCEDURE dbo.SelectEmployees
AS
SET NOCOUNT ON;
SELECT FirstName, LastName, DepartmentID, Salary, ID FROM tblEmployee
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
/****** Object: Stored Procedure dbo.UpdateEmployee Script Date: 10-Jul-02 12:41:13 PM
******/
CREATE PROCEDURE dbo.UpdateEmployee
(
@FirstName varchar(50),
@LastName varchar(70),
@DepartmentID int,
@Salary money,
@Original_ID int
)
AS
SET NOCOUNT OFF;
UPDATE tblEmployee SET FirstName = @FirstName, LastName = @LastName, DepartmentID =
@DepartmentID, Salary = @Salary WHERE (ID = @Original_ID)
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS OFF
GO
/****** Object: Stored Procedure dbo.procEmployeesSorted Script Date: 10-Jul-02 12:41:13
PM ******/
CREATE PROCEDURE procEmployeesSorted AS
select * from tblEmployee
order by LastName, FirstName
return

GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
/****** Object: Trigger dbo.trCustomerI Script Date: 10-Jul-02 12:41:14 PM ******/
create trigger trCustomerI
on dbo.tblCustomer
for insert, update
as
update tblCustomer
set tblCustomer.LastNameSoundex = soundex(tblCustomer.LastName)
from inserted
where tblCustomer.ID = inserted.ID
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

Although the script that was automatically generated by the Enterprise Manager is certainly good enough to
run "as is," you should feel free to modify it as you desire. Just remember that if you regenerate
the script, all your manual changes will be lost.
One useful modification would be to include a Print command in strategic places in the script, to display
some text to the SQL Query Analyzer's Messages window. Doing so gives a visible indication of the script's
progress. You can also use the printed output as a debugging tool that can help you determine where errors
in your batch might be. Using the Print command is optional and has no direct bearing on the creation of
the database.
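
For example, a few Print commands might be interleaved with the batch like this (the messages shown here are illustrative; they aren't part of the generated script):

```sql
PRINT 'Dropping existing objects...'
GO
-- ...drop statements from Listing 3.3...
PRINT 'Creating tables...'
GO
-- ...CREATE TABLE statements from Listing 3.4...
PRINT 'Novelty database created successfully.'
GO
```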
Remember, when you're using batches such as this one, feel free to run and rerun them whenever you want.
This batch is written in such a way that it completely destroys and re-creates the database when it is
executed. If you load sample data into your database during testing, you don't have to worry about that data
inadvertently hanging around when you put your database into production mode. In addition, creating a
database from a batch lets you easily migrate your database design to multiple servers. That enables you to
have two physically distinct database servers: one for development and another for production.

[ Team LiB ]

[ Team LiB ]

Summary
In this chapter we presented the basics for getting started with distributed applications using Microsoft SQL
Server. Bear in mind that, although we focused here on how to configure and use Microsoft SQL Server
2000, the material in the other chapters of this book is applicable to any database system: Oracle, Sybase,
Informix, or whatever. So long as there is an ODBC driver or an OLE DB provider that can get to your back-end data, you can use the database from Visual Basic .NET.

Questions and Answers

Q1:

I've always been terrified of fooling around with SQL Server. It always seemed like a
black art to me. I once knew a guy whose brain exploded after doing a week of
constant server-side programming. Will the topics covered in this chapter enable me to
create serious database applications with SQL Server without going crazy?!

A1:

Yes and no. We didn't design this chapter to cover hard-core, day-to-day database
administration, performance tweaking, or anything like that. And it's definitely not designed to
be a comprehensive guide to SQL Server, just an introduction. The material that covered getting
started with SQL Server in the first half of this chapter was designed specifically to let you get
comfortable with SQL Server. Migrating up from single-user and small-workgroup computing to
client-server isn't trivial, but it shouldn't be a black art, either. That fear is what this chapter is
designed to dispel. (As to what is happening to your friend's head, that's between him and his
psychiatrist.)

Q2:

If most of my queries are pretty straightforward and do not contain complicated logic,
is there any reason for me to get involved with using stored procedures?

A2:

Yes. In fact, there are two main advantages to using stored procedures instead of coding your
SQL queries in your application code:

1. Performance. For many programmers, this alone is enough of a reason to use stored
procedures! The improved performance of a query that is implemented in a stored
procedure rather than in the application code is due to the fact that a stored procedure is
already precompiled and planned by SQL Server before it is called to be executed. When an
SQL query string is passed from the client to the SQL Server to be executed, it must first
be parsed and compiled, and have an execution path determined, before it can actually be
executed. That is a lot of overhead to be paid at run-time, when you are trying to squeeze
out as much performance as possible.
2. Manageability. Implementing queries as stored procedures means that all of an
application's queries are stored in a single, central location, rather than strewn throughout
the thousands or tens of thousands of lines of application code. Moreover, it allows
multiple projects or applications to utilize the same code if they are using the same
database. This means less work (coding/debugging/testing) and fewer bugs. It also allows us
to leverage the advanced security control mechanisms offered by SQL Server. Finally, using
stored procedures offers the option of "divide and conquer," or specialization, for the
development of the application code. The application developers, who specialize in the
business logic and flow of the application, can focus on their application code, while leaving
the database access and querying to the database gurus working on the server.
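
To make the performance point concrete, here is a sketch contrasting the two approaches against the Novelty database (the customer ID of 42 is just an illustrative value):

```sql
-- Ad hoc query: sent to the server as a string, so SQL Server must
-- parse it, compile it, and build an execution plan on every call
SELECT * FROM tblCustomer WHERE ID = 42

-- The same lookup through the precompiled GetCustomerFromID
-- procedure from Listing 3.6: the execution plan already exists
EXEC GetCustomerFromID @custID = 42
```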

[ Team LiB ]

[ Team LiB ]

Chapter 4. ADO.NET Data Providers


IN THIS CHAPTER

Overview of ADO.NET
Overview of .NET Data Provider Objects
The Connection Object
The Command Object
The DataReader Object
Using the Connection and Command Design-Time Components
Other Data Provider Objects
Sometimes it seems that every time database developers turn around, Microsoft has a new and different
data model for them to use to access databases. In this chapter we focus on the newest
incarnation: ADO.NET. We begin with an explanation (or our opinion) of the reason for this new database
access model and whether it is, in fact, justified. We then provide an overview of the model and its
architecture as a whole.
Our purpose here is to lay the groundwork for working with ADO.NET. In doing so we discuss its basic
operations and take an in-depth look at some of the basic objects of an ADO.NET data provider: the
Connection, Command, Parameter, and DataReader objects. In Chapters 5, 6, and 7 we take a
comprehensive look at the more advanced and exciting objects that ADO.NET provides, all of which revolve
around the DataSet, the central ADO.NET object.

[ Team LiB ]

[ Team LiB ]

Overview of ADO.NET
If you've been developing database applications with Visual Basic for a while, you have gotten used to the
fact that every several years Microsoft comes out with a new and improved data access model. In addition to
a new TLA (three-letter acronym), there is a new API/Object model to learn and master. In recent years
developers have been through ODBC, DAO, RDO, and ADO, before getting to today's ADO.NET. With the
release of each new technology, you need to study its goals and design and then ask the question: Does it
make sense for me and my team to move to this new technology? In most cases, the answer has been yes,
unless you were working on a project whose current and future requirements were unaffected by the new
features and capabilities offered. More often than not, that wasn't the case, although the availability of RDO
(Remote Data Objects) was indeed irrelevant to projects that were committed to using the JET (MDB)
database engine (DAO is still the better choice for MDB access).

Motivation and Philosophy


So, let's ask the $64,000 question: Why do we need another new data access object model? The simple
answer is the one given by an old car commercial in the late 1970s: "You asked for it, you got it (Toyota)!"
That is, ADO.NET brings together many of the things that developers have been clamoring for since the
release of ADO several years ago. Yes, some things, such as disconnected recordsets and XML support,
were added to ADO over the years. The problem is that they are just that: add-ons. They are afterthoughts
to the original design and therefore often incomplete and awkward to use.
Classic (COM-based) ADO has a single object, the recordset, that can be used for myriad purposes.
Depending on various configuration properties, such as cursor type, cursor location, and locking type, the
recordset will behave differently and fulfill different purposes.
In ADO.NET, the different functionality and purpose are divided into separate objects, which can then be
used individually or in conjunction with each other. This capability allows developers to use objects that are
optimized for specific purposes and that do not carry any additional distracting "baggage." At the same time,
the various objects are designed to work seamlessly with each other, in order to easily provide more
advanced functionality.

Support for Distributed Applications and Disconnected Programming Model


ADO.NET provides excellent and flexible support for developing applications that are distributed across
multiple computers (database servers, application servers, and client workstations). In particular, superb
support is provided for disconnected (three-tier or n-tier) applications, where the concurrent load and locking
of resources on the database server is minimized. The result is greater scalability: the ability to support a
greater number of concurrent users by incrementally adding additional hardware. This advantage is
particularly crucial when developing Web applications.

Extensive XML Support


Although classic ADO is able to save and read data in XML, the actual format is somewhat unusual and not
easy to work with. In addition, as XML support was added to ADO rather late in its evolution, the degree of
support is somewhat limited and inflexible. In contrast, for ADO.NET, XML support is an essential design
feature built in from the beginning. The ADO.NET philosophy is that "Data is Data": it doesn't matter where
data comes from; it can be accessed and processed as relational data or hierarchical data as desired,
depending on a particular need or desired tool.
Moreover, XML is used as the transmission format to pass data between tiers and/or computers. That not
only eliminates the problem of having to allow COM calls through firewalls, but it also allows data to be
shared with applications running on non-Windows platforms (because everybody can process text-based
XML).

Integration with .NET Framework


ADO.NET isn't simply the next version of ADO. It was specifically designed and implemented to be part of the
.NET Framework. That is, it runs as Managed Code, and all its objects are designed and work as you would
expect a .NET object to work. It is also part of the standard .NET Framework package, thereby avoiding the
versioning issues that experienced ADO developers have faced in the past.

ADO Look and Feel


Although ADO.NET has been designed and implemented specifically to be part of the .NET Framework, much
of it should still seem very familiar to developers experienced in classic ADO. Even for features that are new
or implemented differently in ADO.NET, the experienced ADO developer will be able to leverage his
current knowledge to quickly understand and take advantage of the ADO.NET objects.

ADO.NET Versus Classic ADO (ADO 2.X)


When trying to get a handle on the differences between ADO.NET and classic ADO, bear in mind the
following.

Classic ADO is designed for connected access and is tied to the physical data model. In contrast,
ADO.NET is designed for disconnected access and can model data logically.
In ADO.NET there is a clear distinction and separation between the connected data access model and
the disconnected programming model.
There are no CursorType, CursorLocation, or LockType properties in ADO.NET. It contains only
static, client-side cursors and optimistic locking.
Rather than having a single, multipurpose object, ADO.NET splits the functionality of the classic ADO
recordset into smaller, specific objects such as DataReader, DataSet , and DataTable.
ADO.NET allows manipulation of XML data, rather than just using XML as an I/O format.
ADO.NET provides for strongly typed DataSets, rather than having all fields being of type Variant .
This feature allows greater design-type error detection and greater run-time performance.

ADO.NET Objects Within the .NET Framework


Figure 4.1 shows how the ADO.NET classes fit into the overall .NET Framework. At the bottom is the
Common Language Runtime (CLR), which is the run-time infrastructure for all .NET applications (regardless
of the language they are written in). Typical functionality provided by the CLR includes a common type
system, memory management, and object lifetime management.
Figure 4.1. ADO.NET classes within the .NET Framework

The next logical layer, which builds upon the support of the CLR, is the set of system base classes. These are
the classes that provide rich functionality to be utilized by .NET applications. Figure 4.1 shows some, but not
all, of the classes in the .NET Framework library. In effect, it is the new Windows API (Application
Programming Interface). In the past, the way to access the functionality of the Windows operating system
was through the Windows API, which consisted of a very large set of inconsistently designed function calls.
With .NET, the way to access this functionality (along with new functionality) is by using the properties and
methods exposed by the system base classes. This approach is a more object-oriented, consistent, and
comfortable way to develop Windows programs, regardless of whether they are desktop, browser, or Web
Service applications.
This layer contains several of the namespaces (groups of classes and other definitions) related to data
access: System.Data, System.Data.OleDb , and System.Data.SqlClient. In the remainder of this
chapterand in Chapters 5, 6 and 7we take a closer look at many of the classes and definitions in these
namespaces.

Application Interfaces
The top level is where a split, or differentiation, exists between different types of applications that
developers can build. There are classes and controls for building (classic) forms-based Windows
applications (Windows Forms), other classes and controls for building browser-based Web applications (Web
Forms), and classes for building Web Services applications. However, all involve the use of a common library
of classes for the application logic: the system base classes.
Now that you have a sense of where the ADO.NET classes fit into the overall scheme of the .NET Framework,
let's take a closer look at the main ADO.NET objects.

[ Team LiB ]

[ Team LiB ]

Overview of .NET Data Provider Objects


Despite the emphasis on the disconnected model of programming, you still need to connect to the physical
database to actually retrieve, update, insert, and/or delete data from the database. The software that
connects and communicates with the physical database in ADO.NET is called a .NET Data Provider. A data
provider is the .NET managed code equivalent of an OLEDB provider or ODBC driver. A data provider consists
of several objects that implement the required functionality, as defined by the classes and interfaces from
which they are derived.
Currently, three different ADO.NET Data Providers are available, each defined within its own namespace. The
prefixes used for objects in these namespaces are OleDb, Sql, and Odbc, respectively. When referring to
these objects in a generic sense, we use the object name, without any prefix.

SqlClient
The SqlClient data provider is optimized to work with SQL Server 7.0 or higher. It achieves greater
performance because (1) it communicates with the database directly through its native Tabular Data Stream
(TDS) protocol, rather than through OLEDB, which needs to map the OLEDB interface to the TDS protocol;
(2) the overhead of COM interoperability services is eliminated; and (3) there is no excess bloat of
functionality that isn't supported by SQL Server. The objects for this provider are contained in the
System.Data.SqlClient namespace.

Oledb
The Oledb data provider utilizes an existing native (COM) OLEDB provider and the .NET COM interoperability
services to access the database. This data provider is the one to use if you aren't accessing an SQL Server
7.0 or higher database. It allows you to access any database for which you have an OLEDB provider. The
objects for this provider are contained in the System.Data.OleDb namespace.

Odbc
The Odbc data provider is the one to use when you're accessing databases that don't have their own .NET
Data Provider or a (COM) OLEDB provider. Also for a given database, the ODBC driver may provide better
performance than the OLEDB driver, so you may want to perform some tests to determine whether that's the
case for your application. The objects for this provider are contained in the Microsoft.Data.Odbc
namespace.

Note
Development of the Data Provider for ODBC lagged a bit behind the rest of the .NET Framework
and Visual Studio.NET. Thus it wasn't included in the original Visual Studio.NET release, but you
can download it from the Microsoft Web site. Also, be on the lookout for additional .NET data
providers that will become available in the future.


Currently, an Oracle .NET Data Provider is also available from the Microsoft Web site. The
downloadable ODBC and Oracle Data Providers will be incorporated into version 1.1 of the .NET
Framework, which will ship together with Visual Studio.NET 2003. As a result, the namespace for
the ODBC provider will change from

Microsoft.Data.Odbc

to

System.Data.Odbc

The example in this chapter features the version 1.0 ODBC provider. If you're already using
version 1.1, be sure to make the change to the namespace, as just described.

Core Objects
Each data provider comprises the four core objects listed in Table 4.1.

Table 4.1. Core Data Provider Objects

Object       Brief Description
Connection   Establishes a connection to a specific data source.
Command      Executes a command at a data source. Exposes a collection of
             Parameter objects and methods for executing different types of
             commands.
DataReader   Reads and returns a forward-only, read-only stream of data from a
             data source.
DataAdapter  Bridges a DataSet and a data source to retrieve and save data.
Each object is derived from a generic base class and implements generic interfaces but provides its own
specific implementation. For example, SqlDataAdapter, OleDbDataAdapter, and OdbcDataAdapter are
all derived from the DbDataAdapter class and implement the same interfaces. Each one, however, will
implement them specifically for its respective data source.
The System.Data.OleDb namespace includes the following objects:

OleDbConnection
OleDbCommand
OleDbDataReader
OleDbDataAdapter
Similarly, the System.Data.SqlClient namespace includes the following objects:

SqlConnection
SqlCommand
SqlDataReader
SqlDataAdapter
And the Microsoft.Data.Odbc namespace includes the following objects:

OdbcConnection
OdbcCommand
OdbcDataReader
OdbcDataAdapter
In the same way, all future or additional data providers will have their own namespaces and prefixes and
implement the required objects appropriately.
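Because each provider's objects implement the same generic interfaces, you can also write code against the interfaces rather than a concrete provider. The following is a minimal sketch of that idea; the helper routine and connection strings are illustrative, not part of the book's sample project:

```vbnet
Imports System.Data
Imports System.Data.SqlClient
Imports System.Data.OleDb

Module ProviderNeutralDemo
    ' Accepts any provider's Connection object, since all of them
    ' implement the generic IDbConnection interface.
    Sub ShowState(ByVal cnn As IDbConnection)
        Console.WriteLine("State: " & cnn.State.ToString())
    End Sub

    Sub Main()
        ' Either concrete type can be passed where IDbConnection is expected.
        ShowState(New SqlConnection("server=localhost;uid=sa;database=pubs"))
        ShowState(New OleDbConnection( _
            "provider=SQLOLEDB;server=localhost;uid=sa;database=pubs"))
    End Sub
End Module
```

The same pattern applies to IDbCommand, IDataReader, and the other generic interfaces, which is what makes swapping data providers largely a matter of changing object prefixes and connection strings.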


The Connection Object


The ADO.NET Connection object is very similar to the Connection object that you know and love from
classic ADO. Its purpose is straightforward: to establish a connection to a specific data source, with a
particular user account and password, as specified by a connection string. You can customize the connection
by specifying other parameters and values in the connection string. A Command object (or a DataAdapter)
can then use this connection to perform desired operations against the data source.

Note
Unlike the ADO 2.X Connection object, the ADO.NET Connection doesn't have Execute or
OpenSchema methods. The ability to execute SQL commands is available only through the
Command or DataAdapter objects. The functionality of the OpenSchema method is available by
means of the GetOleDbSchemaTable method of the OleDbConnection object.

Although the derived objects OleDbConnection, SqlConnection, and OdbcConnection all implement the
same interfaces, there are still differences among them. For example, the connection string formats are not
the same. The format for the OleDbConnection is designed to match the standard OLEDB connection string
format with only minor exceptions. The format for the OdbcConnection is designed to closely match that of
a standard ODBC connection string, but it contains some deviations. The connection string format for the
SqlConnection is different from both of the others, as it contains only parameters relevant to SQL Server
7.0 and higher.
Furthermore, some objects will add additional properties. For example, the OleDbConnection has a
Provider property to specify the OLEDB provider to be used and the OdbcConnection has a Driver
property to specify the ODBC driver to be used. The SqlConnection has neither of these properties
because the data source type is predetermined (SQL Server). However, the SqlConnection has the
PacketSize and WorkstationID properties, which are specific to SQL Server and not supported by the
other two types of connections.
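To make the connection string differences concrete, here is a sketch of typical strings for each of the three providers. The server name and credentials are placeholders, and the exact keywords accepted depend on the provider or driver you use:

```vbnet
' SqlClient: no provider or driver keyword; SQL Server is implied.
Dim sqlCnn As New System.Data.SqlClient.SqlConnection( _
    "server=localhost;uid=sa;database=pubs")

' OleDb: the Provider keyword selects the underlying OLEDB provider.
Dim oleCnn As New System.Data.OleDb.OleDbConnection( _
    "provider=SQLOLEDB;server=localhost;uid=sa;database=pubs")

' Odbc: the Driver keyword (or a DSN) selects the ODBC driver.
Dim odbcCnn As New Microsoft.Data.Odbc.OdbcConnection( _
    "driver={SQL Server};server=localhost;uid=sa;database=pubs")
```

Note that the SqlClient string would be rejected by the OleDb provider (no Provider keyword) and vice versa, which is why connection strings generally can't be shared across providers unchanged.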
Okay, let's finally start writing some code! We lead you through each of the core data provider objects in
simple, concrete steps. We start with the following simple example and develop it as we go through the
chapter.

1. Launch Visual Studio.NET.
2. Create a new Visual Basic Windows Application project.
3. Name the project DataProviderObjects.
4. Specify a path for where you want the project files to be saved.
5. Enlarge the size of Form1.
6. In the Properties window for Form1, set its Text property to Data Provider Objects.
7. In the upper-left-hand corner of the form, add a button from the Windows Forms tab of the Toolbox.
8. In the Properties window, set the Name property of the button to btnConnection and set the Text
   property to Connection.
9. From the Windows Forms tab of the Toolbox, add a textbox to Form1 and place it on the right side of
   the form.
10. In the Properties window, set the Name property of the textbox to txtResults, the Multiline property
    to True, and the ScrollBars property to Both.
11. Enlarge the textbox so that it covers about 80 percent of the area of the form.
When you've finished, your form should look something like that shown in Figure 4.2.
Figure 4.2. Form1 of the DataProviderObjects sample project

Switch to the code view of the form and add the following lines of code at the top of the file. Doing so
imports the namespaces you'll use as you develop the sample application throughout this chapter:

Imports System.Data
Imports System.Data.SqlClient
Imports System.Data.OleDb
Imports Microsoft.Data.Odbc

Note the namespace for the generic ADO.NET classes and definitions and the separate namespace for each
data provider.

Note
The Visual Studio editor may not recognize the Microsoft.Data.Odbc namespace, as it is
actually an add-on to the base product release. If that's the case, do the following.

1. Download the Odbc data provider installation file from the Microsoft Web site and follow the
   instructions to install it on your computer.
2. In the Solution Explorer, right-click on the References node for the DataProviderObjects
   project.
3. Select Add Reference from the pop-up menu that is displayed.
4. On the .NET tab of the Add Reference dialog box, scroll through the list of components until
   you see Microsoft.Data.Odbc.dll.
5. Double-click on the Microsoft.Data.Odbc.dll list item to add it to the Selected Components
   list at the bottom of the dialog.
6. Click on the OK button to close the dialog box.

If, for some reason, one of the other imported namespaces isn't recognized, you'll need to add a
reference to System.Data.dll. Follow steps 2 through 6, substituting System.Data.dll for
Microsoft.Data.Odbc.dll in step 4.

Now add the code shown in Listing 4.1 to the btnConnection button to open a connection to the pubs database on
SQL Server. This code opens a connection and displays the state of the connection before and after
attempting to open the connection.
Listing 4.1 Code to open a database connection and display its state

Private Sub btnConnection_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnConnection.Click
    ' Create an instance of a Connection object
    Dim cnn As SqlConnection = New SqlConnection()
    ' Set the connection string
    cnn.ConnectionString = _
        "server=localhost;uid=sa;database=pubs"
    txtResults.Clear()
    ' Display connection state
    If (cnn.State = System.Data.ConnectionState.Open) Then
        txtResults.Text = txtResults.Text & "Connection is Open"
    Else
        txtResults.Text = txtResults.Text & "Connection is Closed"
    End If
    txtResults.Text = txtResults.Text & ControlChars.CrLf
    ' Open the Connection
    txtResults.Text = txtResults.Text & "Opening DB connection . . ." _
        & ControlChars.CrLf
    cnn.Open()
    ' Display connection state
    If (cnn.State = System.Data.ConnectionState.Open) Then
        txtResults.Text = txtResults.Text & "Connection is Open"
    Else
        txtResults.Text = txtResults.Text & "Connection is Closed"
    End If
    txtResults.Text = txtResults.Text & ControlChars.CrLf
End Sub

Tip
A useful new feature of VB.NET is the ability to get a text string representation of an enumeration
(enum) value automatically, rather than having to write a routine that performs a select-case
statement over all the possible values for the enumeration. All enumeration types, which are
objects, inherit the ToString method that returns the string corresponding to its current value.
In Listing 4.1, you can replace the If-Else statements that display the connection state with a
single line. Thus you can replace the lines

' display connection state
If (cnn.State = System.Data.ConnectionState.Open) Then
    txtResults.Text = txtResults.Text & "Connection is Open"
Else
    txtResults.Text = txtResults.Text & "Connection is Closed"
End If

with

' display connection state
txtResults.Text = txtResults.Text & "Connection is " & _
    cnn.State.ToString & ControlChars.CrLf

When you run the DataProviderObjects project and click on the Connection button, the textbox should
indicate that the connection is closed, being opened, and then open, as shown in Figure 4.3.
Figure 4.3. Before and after results of opening a connection, using the code in Listing 4.1

Note
When writing production code, you need to decide on and implement an error handling strategy
for most routines and operations. This strategy should normally be based on the Try-Catch block
error handling structure. We don't normally include this code in our examples because our
purpose is to focus on database programming concepts, rather than general practices for
programming in VB.NET.
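For reference, here is a minimal sketch of how the Try-Catch pattern described in the note could be applied to the connection code from Listing 4.1; the message text is illustrative:

```vbnet
Dim cnn As New SqlConnection("server=localhost;uid=sa;database=pubs")
Try
    cnn.Open()
    ' ... work with the connection here ...
Catch ex As Exception
    ' Report the failure instead of letting the run-time error escape.
    MessageBox.Show("Could not open connection: " & ex.Message)
Finally
    ' Ensure the connection is released even if an error occurred.
    If cnn.State = ConnectionState.Open Then
        cnn.Close()
    End If
End Try
```

The Finally block runs whether or not an exception was thrown, which makes it the natural place to close connections and release other resources.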


The Command Object


The ADO.NET Command object should also seem very familiar to experienced ADO 2.X programmers. Like the
ADO.NET Connection object, the Command object is similar to its ADO 2.X predecessor. This object allows
you to execute commands against a data source and obtain the returned data and/or results, if applicable.
As expected, it has the CommandText and CommandType properties to define the actual command text and
type, the Connection property to specify a connection to be used to execute the command, and the
CommandTimeout property to set the waiting time for a command to complete before giving up and
generating an error. It also has a Parameters property, which is a collection of parameters to be passed to
and/or from the executed command. Finally, unlike the classic ADO Command object, a Transaction
property specifies the transaction in which the command executes.
All three versions of our Command object (OleDb, Sql, and Odbc) have identical properties and methods,
with one exception. The SqlCommand has an additional method that the other two don't have: the
ExecuteXmlReader method. It takes advantage of SQL Server's ability to return data automatically in XML
format (when the FOR XML clause is added to the SQL Select query).

Note
Another difference among the versions of the Command object for the different data providers has
to do with the values for the CommandType property. All three support either Text or
StoredProcedure, but the OledbCommand object also supports a third possible value of
TableDirect. This method efficiently loads the entire contents of a table by setting the
CommandType to TableDirect and the CommandText to the name of the table.

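A sketch of what such a TableDirect command could look like follows; it assumes an already open OleDbConnection named cnn to the pubs database, whose OLEDB provider supports opening a table by name:

```vbnet
Dim cmd As New OleDbCommand()
cmd.Connection = cnn
cmd.CommandType = CommandType.TableDirect
cmd.CommandText = "authors"      ' a table name, not an SQL statement
Dim rdr As OleDbDataReader = cmd.ExecuteReader()
While rdr.Read()
    Console.WriteLine(rdr("au_lname"))
End While
rdr.Close()
```

Because no SQL text is parsed, the provider can open the rowset for the table directly, which is where the efficiency gain comes from.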
Let's continue with the form you prepared, as illustrated in Figure 4.2.

1. Add an additional button immediately below the btnConnection button from the Windows Forms tab of
the Toolbox.
2. In the Properties window, set the Name property of the button to btnCommand and set the Text
property to Command.
3. Add the code for this btnCommand button, as shown in Listing 4.2.
Listing 4.2 Code to open a database connection and prepare a command object

Private Sub btnCommand_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnCommand.Click
    ' Create an instance of a Connection object
    Dim cnn As SqlConnection = New SqlConnection( _
        "server=localhost;uid=sa;database=pubs")
    ' Create instance of Command object
    Dim cmd As SqlCommand = New SqlCommand()
    txtResults.Clear()
    ' Set command's connection and command text
    cmd.Connection = cnn
    cmd.CommandType = CommandType.Text
    cmd.CommandText = "Select au_lname, state from authors"
    ' Write out command string
    txtResults.Text = "Command String:" & ControlChars.CrLf
    txtResults.Text = txtResults.Text & ControlChars.Tab & _
        cmd.CommandText() & ControlChars.CrLf
End Sub

When you run the DataProviderObjects project and click on the Command button, the textbox should display
the SQL statement that you assigned as the CommandText of the SqlCommand object: Select au_lname,
state from authors.

Note
Many of the .NET Framework classes, as well as classes written by other developers, have
overloaded object constructors. In other words, there are several different ways of creating a new
instance of the class, where each constructor takes a different set of arguments. You choose the
version that best suits your current usage or need.
The constructor used in Listing 4.2 for the SqlConnection object is different from the one used
in Listing 4.1. There, we first used the default constructor, which never takes an argument. We
later assigned the connection string to the SqlConnection object by setting the
ConnectionString property, which resulted in:

' Create an instance of a Connection object
Dim cnn As SqlConnection = New SqlConnection()
' Set the connection string
cnn.ConnectionString = "server=localhost;uid=sa;database=pubs"

In Listing 4.2, we used a constructor for the SqlConnection object that accepts a connection
string as a parameter. That allowed us to create the object and assign it a connection string all in
one place, in a single line of code:

' Create an instance of a Connection object
Dim cnn As SqlConnection = New SqlConnection( _
    "server=localhost;uid=sa;database=pubs")

Using the Command Object with Parameters and Stored Procedures


When issuing queries or commands against a data source, you often need to pass in parameter values. That
is almost always true when you're executing an action (Update, Insert, or Delete) command and calling
stored procedures. To meet these needs, the Command object contains a Parameters property, which is a
ParameterCollection object, containing a collection of Parameter objects. Again, this feature is very
similar to ADO 2.X.
A Parameter (and the ParameterCollection) object is closely tied to its respective data provider, so it is
one of the objects that must be implemented as part of an ADO.NET Data Provider. There is a significant
difference between programming with the SqlParameterCollection versus the
OdbcParameterCollection and the OleDbParameterCollection. The OdbcParameterCollection and
the OleDbParameterCollection are based on positional parameters, whereas the
SqlParameterCollection is based on named parameters. This difference affects the way you define both
queries and parameters.
Let's start with a simple parameter query against the authors table in the pubs database. Say that you want
to return all the authors from a particular state. On the one hand, if you were using the Oledb or Odbc Data
Provider, the query would look like

Select state, au_fname, au_lname from authors where state = ?

where the placeholder for the parameter is a question mark. Placeholders for additional parameters would
also be question marks. The way the parameters are differentiated from each other is by position. That is,
the order in which the parameters are added to the ParameterCollection must match exactly the order in
which they appear in the query or stored procedure.
On the other hand, if you were using the SqlClient Data Provider, the query would look like

Select state, au_fname, au_lname from authors where state = @MyParam

where the placeholder for the parameter is the name of the specific parameter; additional parameters would
also be indicated by their specific names. Because parameters are differentiated from each other by name,
they can be added to the ParameterCollection in any order.
You can create a Parameter object explicitly by using the Parameter constructor (that is, New) or by
passing the required arguments to the Add method of the ParameterCollection objectthe Parameters
property of the Command object. Remember also that each of these two methodsthe Parameter
constructor and the Add methodhave several overloaded options.
Here is one way to add a parameter to a command by explicitly creating the parameter object:

Dim myParameter As New OdbcParameter("@MyParam", OdbcType.Char, 2)
myParameter.Direction = ParameterDirection.Input
myParameter.Value = "CA"
cmd.Parameters.Add(myParameter)

And here is one way to add a parameter to a command by passing the arguments to the Add method:

cmd.Parameters.Add("@MyParam", OdbcType.Char, 2)
cmd.Parameters("@MyParam").Direction = ParameterDirection.Input
cmd.Parameters("@MyParam").Value = "CA"

The second method is shorter and is normally preferred, unless there is a reason to reuse the same
Parameter object.
You need to provide the parameter name along with its type and length (if appropriate) to the Parameter's
Add method. You can then set the direction to be either Input, Output, InputOutput, or ReturnValue.
The default direction is Input. Finally, if providing a value for the parameter, you assign this value to the
Value property of the parameter object. You could set several additional properties, including Scale,
Precision, and IsNullable.
If you were using the SqlClient Data Provider, you would have nearly identical code. The only differences are
that the Odbc prefixes would be replaced by Sql prefixes and that the type enumeration is named
SqlDbType:

Dim myParameter As New SqlParameter("@MyParam", SqlDbType.Char, 2)
myParameter.Direction = ParameterDirection.Input
myParameter.Value = "CA"
cmd.Parameters.Add(myParameter)

or

cmd.Parameters.Add("@MyParam", SqlDbType.Char, 2)
cmd.Parameters("@MyParam").Direction = ParameterDirection.Input
cmd.Parameters("@MyParam").Value = "CA"

Tip
The way to properly (successfully) pass in a null value for a parameter is by using the Value
property of the DBNull object. The line of code is

cmd.Parameters("@MyParam").Value = DBNull.Value

Modify the code for the btnCommand button, as shown in Listing 4.3. When you run the program and click
on the btnCommand button, the text of the query and the name and value of the parameter will be
displayed.

Listing 4.3 Code to prepare and display command and parameter objects

Private Sub btnCommand_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnCommand.Click
    ' Create an instance of a Connection object
    Dim cnn As SqlConnection = New SqlConnection( _
        "server=localhost;uid=sa;database=pubs")
    ' Create instances of Command and Parameter objects
    Dim cmd As SqlCommand = New SqlCommand()
    Dim prm As SqlParameter = New SqlParameter()
    txtResults.Clear()
    ' Open the Connection
    cnn.Open()
    ' Set command's connection and command text
    cmd.Connection = cnn
    cmd.CommandType = CommandType.Text
    cmd.CommandText = _
        "Select au_lname, state from authors where state = @MyParam"
    ' Create parameter and set value
    cmd.Parameters.Add(New SqlParameter("@MyParam", SqlDbType.Char, 2))
    cmd.Parameters("@MyParam").Value = "CA"
    ' Write out command string
    txtResults.Text = "Command String:" & ControlChars.CrLf
    txtResults.Text = txtResults.Text & ControlChars.Tab & _
        cmd.CommandText() & ControlChars.CrLf
    ' Write out command parameters and values
    txtResults.Text = txtResults.Text & "Command parameters:" & _
        ControlChars.CrLf
    For Each prm In cmd.Parameters
        txtResults.Text = txtResults.Text & ControlChars.Tab & _
            prm.ParameterName & "=" & prm.Value & ControlChars.CrLf
    Next
End Sub

You call stored procedures in the same way, except that the CommandType is
CommandType.StoredProcedure rather than CommandType.Text. The name of the stored procedure is
assigned to the CommandText property. Thus, calling the stored procedure named GetAuthorsFromState,
which expects a two-character parameter, would look like

cmd.CommandType = CommandType.StoredProcedure
cmd.CommandText = "GetAuthorsFromState"
cmd.Parameters.Add("@MyParam", SqlDbType.Char, 2)
cmd.Parameters("@MyParam").Direction = ParameterDirection.Input
cmd.Parameters("@MyParam").Value = "CA"

Tip
When specifying a stored procedure to be called by using the OdbcCommand, you must take care
to use the standard ODBC stored procedure escape sequences, rather than just specifying the
procedure name for the CommandText. Question marks are used as placeholders for the
parameters in the escape sequence. The OdbcCommand equivalent of the previous code section is

cmd.CommandType = CommandType.StoredProcedure
cmd.CommandText = "{call GetAuthorsFromState (?)}"
cmd.Parameters.Add("@MyParam", OdbcType.Char, 2)
cmd.Parameters("@MyParam").Direction = ParameterDirection.Input
cmd.Parameters("@MyParam").Value = "CA"

If the stored procedure also returns a return value, it is specified by preceding the procedure
name with "? =", as in

cmd.CommandText = "{? = call GetAuthorsFromState (?)}"

If you're expecting a called stored procedure to return a value, you would specify the direction to be Output
and then read the Value property of the parameter after calling the stored procedure. In this example, we
also define a return value to be returned from the stored procedure. Because an SQL Server Int type is
specified, there is no need to specify a length for the parameter, as it is by definition four bytes long:

cmd.Parameters.Add(New SqlParameter("result", SqlDbType.Int))
cmd.Parameters("result").Direction = ParameterDirection.ReturnValue
cmd.Parameters.Add(New SqlParameter("@MyParam", SqlDbType.Int))
cmd.Parameters("@MyParam").Direction = ParameterDirection.Output
' Call stored procedure here
MsgBox(cmd.Parameters("@MyParam").Value)

Note
When defining a parameter to be a ReturnValue of a called stored procedure, you should define
it as the first parameter added to the Parameters collection. This placement is required for the
Oledb and Odbc parameters because, as we pointed out earlier, they are treated as position-based
and a return value is expected to be in the first position. However, when working with Sql
parameters, you can place the return value parameter in any position because Sql parameters are
treated as name-based parameters.

Shortly, we present additional code examples involving the use of parameters as we show how to execute
these commands.

Executing the Commands


So far, you've seen how to set the various properties and parameters of a Command object, but you haven't
yet actually executed any of these commands! The time has come to do that. There are three standard
methods for executing the commands defined by a Command object and one additional method that is
available only with the SqlCommand object:

ExecuteNonQuery Executes an SQL command that does not return any records.
ExecuteScalar Executes an SQL command and returns the first column of the first row.
ExecuteReader Executes an SQL command and returns the resulting set of records via a
DataReader object.
ExecuteXmlReader (SqlCommand only) Executes an SQL command and returns the resulting set of
records as XML via an XmlReader object.
We now look at the three execution methods common to all the data providers. In Chapter 10, we discuss the ExecuteXmlReader
method as we explore the topic of ADO.NET and XML.

ExecuteNonQuery
The ExecuteNonQuery method is perhaps the most powerful way to execute commands against a data
source. This method allows you to execute commands that don't return any values (result set or scalar)
other than a value indicating the success or failure of the command. This method is also the most efficient
way to execute commands against a data source. You can execute an SQL statement or stored procedure
that is either (1) a Catalog or Data Definition Language (DDL) command, which can create or modify
database structures such as tables, views, or stored procedures; or (2) an Update command (Update, Insert,
or Delete) that modifies data in the database.

Note
The ExecuteNonQuery method returns a single integer value. The meaning of this return value
depends on the type of command being executed.
If you're executing a Catalog or DDL command to modify database structures, the value of the
method's return value is -1 if the operation completed successfully. If you're updating records with
an Update, Insert, or Delete statement, the return value is the number of rows affected by the
operation. In either case, the return value of the method is 0 if the operation fails.

Continuing with the DataProviderObjects project, you will now use the objects in the Oledb namespace and
work with the pubs database. Your task is to create a new table for this database by executing the required
DDL command. This table will map between zip codes and states. The field definitions match those used in
the pubs database (which are different from those used in the Novelty database). The table is to have two
fields: one for the zip code and another for the corresponding state. The SQL statement to create this table
is

CREATE TABLE tblStateZipCodes (
    ZipCode char (5) NOT NULL,
    State char (2) NOT NULL )

Now modify the original Form1 by doing the following.

1. Open Form1 in the Visual Studio IDE.
2. In the upper-left corner of the form, add another button from the Windows Forms tab of the Toolbox.
3. In the Properties window, set the Name property of the button to btnNonQuery and set the Text
   property to ExecuteNonQuery.
Then add the code shown in Listing 4.4 for the Click event of this new button.
Listing 4.4 Code to create a database table, using the objects from the Oledb namespace

Private Sub btnNonQuery_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnNonQuery.Click
    ' Create an instance of a Connection object
    Dim cnn As OleDbConnection = New OleDbConnection( _
        "provider=SQLOLEDB;server=localhost;uid=sa;database=pubs")
    Dim sql As String
    Dim result As Integer
    ' Create instance of Command object
    Dim cmd As OleDbCommand = New OleDbCommand()
    ' Set command's connection and command text
    cmd.Connection = cnn
    cmd.CommandType = CommandType.Text
    ' Assign SQL statement to create a new table
    sql = "CREATE TABLE tblStateZipCodes ( " & _
        "ZipCode char (5) NOT NULL, " & _
        "State char (2) NOT NULL )"
    cmd.CommandText = sql
    ' Open the Connection before calling ExecuteNonQuery()
    cnn.Open()
    ' We need to put the code inside a Try-Catch block,
    ' since a failed command also generates a run-time error
    Try
        result = cmd.ExecuteNonQuery()
    Catch ex As Exception
        ' Display error message
        MessageBox.Show(ex.Message)
    End Try
    ' Show results of command execution
    If result = -1 Then
        MessageBox.Show("Command completed successfully")
    Else
        MessageBox.Show("Command execution failed")
    End If
    cnn.Close()
End Sub

When you run the DataProviderObjects project and click on the ExecuteNonQuery button for the first time, a
message box should appear, indicating that the command completed successfully. You can verify that the
table was created correctly by looking at the list of tables for the pubs database, using either the Visual
Studio Server Explorer (Chapter 1) or the SQL Server Enterprise Manager (Chapter 3).
If you then click on the ExecuteNonQuery button again, two message boxes will appear. The first is the text
of the message from the exception generated and is displayed from within the catch block, which offers the
specific reason for the failure. In this case the command was rejected because a table by that name already
exists in the database. A second message box is then displayed, notifying you that the command execution
failed.
In the same way, you can create a view or a stored procedure. To create a view named EmployeeJobs_view
that returns a result set containing job titles and employee names (sorted by job description), change the
SQL statement in Listing 4.4 to

sql = "CREATE VIEW EmployeeJobs_view AS " & _
    "SELECT TOP 100 PERCENT jobs.job_desc, " & _
    "employee.fname, employee.lname " & _
    "FROM jobs INNER JOIN " & _
    "employee ON jobs.job_id = employee.job_id " & _
    "ORDER BY jobs.job_desc"

Note
To include an ORDER BY clause in a view definition to sort the results, you must include a TOP
clause in the select statement.

To create a stored procedure that accepts a single parameter and returns a value as a return value, change
the SQL statement to that shown in Listing 4.5.
Listing 4.5 Code containing an SQL statement to create the AuthorsInState1 stored procedure

sql = "CREATE PROCEDURE AuthorsInState1 @State char(2) " & _
    "AS declare @result int " & _
    "select @result = count(*) from authors " & _
    "where state = @State " & _
    "return (@result)"

Note
Although the ExecuteNonQuery method returns only a single value, if you define any output or
return value parameters for the command, they are correctly filled with the parameter's data. This
approach is more efficient than executing a command that returns a result set or a scalar value.

Let's now turn to the second type of nonquery command: a database update command, which can be an
Update, Insert, or Delete command. These commands usually require parameters, especially when you're
using stored procedures (which you usually want to do, for performance reasons) to carry out these
operations.
Continuing with Form1 in the DataProviderObjects project, suppose that the publisher that has implemented
the pubs database is in a generous mood and has decided to increase the royalty percentage paid to its
authors. Adding a command button and a textbox to the form allows the publisher's CFO to enter the royalty
increase as a parameter to the Update command. You can do so as follows.

1. Add an additional button immediately below the btnNonQuery button.
2. In the Properties window, set the Name property of the button to btnUpdate and set the Text property
   to Update.
3. Add a textbox immediately below this new button from the Windows Forms tab of the Toolbox.
4. In the Properties window, set the Name property of the textbox to txtParam1 and set the Text property
   to 0. Setting the value of txtParam1 to 0 ensures that if you run the program and forget to set this
   value before clicking on the Update button, you won't do any damage or cause a run-time error.
5. Add the code for this new button, as shown in Listing 4.6.
Listing 4.6 Code to update database table, using SQL statement with a parameter

Private Sub btnUpdate_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnUpdate.Click
    Dim result As Integer
    ' Create an instance of a Connection object
    Dim cnn As SqlConnection = New SqlConnection( _
        "server=localhost;uid=sa;database=pubs")
    ' Create instance of Command object
    Dim cmd As SqlCommand = New SqlCommand()
    txtResults.Clear()
    ' Set command's connection and command text
    cmd.Connection = cnn
    cmd.CommandType = CommandType.Text
    cmd.CommandText = "UPDATE roysched SET royalty = royalty + @param1"
    ' Create parameter and set value
    cmd.Parameters.Add(New SqlParameter("@param1", SqlDbType.Int))
    cmd.Parameters("@param1").Direction = ParameterDirection.Input
    cmd.Parameters("@param1").Value = Val(txtParam1.Text)
    ' Open the Connection before calling ExecuteNonQuery()
    cnn.Open()
    result = cmd.ExecuteNonQuery()
    MessageBox.Show(result & " records updated", "DataProviderObjects")
    cnn.Close()
End Sub

You can update the royalty table by running the DataProviderObjects project, setting an integer value in the
parameter textbox, and then clicking on the Update button. A message box should appear, indicating the
number of records modified. You can verify this result by using the SQL Server Enterprise Manager and
displaying the data from the roysched table before and after executing the update command from the demo
program.
You could perform the same update by using a stored procedure, which has the advantages of better
performance and a centralized location. A possible disadvantage is that you may need a database
administrator (DBA), or at least someone who knows how to write stored procedures, as part of your
development team. In a large organization, it could take days to get a DBA to modify some stored
procedure(s). If you can do the job yourself, it should take less than a minute. You can add it by using either
the SQL Server Enterprise Manager or the SQL Query Analyzer, as described in Chapter 3. Alternatively, you
can use the DataProviderObjects project, by changing the SQL statement, as we have done previously.
Here is what the stored procedure would look like:

CREATE PROCEDURE UpdateRoyalties


@param1 int
AS
UPDATE roysched SET royalty = royalty + @param1

In Listing 4.6, we need to change the Command object's CommandType and CommandText properties to call
the stored procedure. These two lines of code now are

cmd.CommandType = CommandType.StoredProcedure
cmd.CommandText = "UpdateRoyalties"

Running the modified program should produce the same results as before. Now, though, the update is
performed by a stored procedure rather than by an SQL statement from our application code.
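If you want to check the stored procedure independently of the demo program, you can run it directly in the SQL Query Analyzer. A quick sketch (the increment value 10 is arbitrary):

```sql
-- Run the stored procedure directly, passing an arbitrary increment.
EXEC UpdateRoyalties @param1 = 10

-- Spot-check the result; each royalty value should now be 10 higher.
SELECT TOP 5 title_id, royalty FROM roysched
```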

ExecuteScalar
At times you may want to execute a database command that returns a scalar value, that is, a single value.
Typical examples of such commands are SQL statements that perform an aggregate function, such as SUM or
COUNT. Other examples are lookup tables that return a single value or commands that return a Boolean
result. The ExecuteScalar method executes the given command and returns the first column of the first

row in the returned result set. Other columns or rows are ignored.
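Before turning to a stored procedure, here is a minimal sketch of ExecuteScalar with an inline aggregate query against the pubs database. The connection string matches the chapter's other examples; note that ExecuteScalar returns an Object, so the result is converted explicitly:

```vb
' Count the rows in the authors table with a single scalar query.
Dim cnn As SqlConnection = New SqlConnection( _
    "server=localhost;uid=sa;database=pubs")
Dim cmd As SqlCommand = New SqlCommand( _
    "SELECT COUNT(*) FROM authors", cnn)
cnn.Open()
' ExecuteScalar returns the first column of the first row as Object.
Dim authorCount As Integer = CInt(cmd.ExecuteScalar())
cnn.Close()
```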
Let's add the following stored procedure to the pubs database:

CREATE PROCEDURE AuthorsInState2 @param1 char(2)


AS
select count(*) from authors where state = @param1

The procedure AuthorsInState2 accepts a parameter that is a two-character state code and returns from the
authors table the number of authors in that state. This procedure is functionally equivalent to
AuthorsInState1, which was shown in Listing 4.5, but it returns its result as a one-row result set rather than
as a return value.

Note
There is a slight performance penalty when you use ExecuteScalar instead of calling
ExecuteNonQuery and passing the scalar value back as a ReturnValue parameter. Why, then, use
the ExecuteScalar method? It's simpler and less work, as you don't have to deal with parameter
definitions in both the command definition and the calling code.
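For comparison, here is a sketch of the ReturnValue approach that the note mentions. It assumes a hypothetical stored procedure, AuthorsInStateRet, that ends with a RETURN statement rather than a SELECT; the extra parameter plumbing is exactly what ExecuteScalar spares you:

```vb
' Hypothetical sketch, assuming this stored procedure exists:
'   CREATE PROCEDURE AuthorsInStateRet @param1 char(2)
'   AS RETURN (SELECT COUNT(*) FROM authors WHERE state = @param1)
' Also assumes cnn is an open SqlConnection to the pubs database.
Dim cmd As SqlCommand = New SqlCommand("AuthorsInStateRet", cnn)
cmd.CommandType = CommandType.StoredProcedure
cmd.Parameters.Add(New SqlParameter("@param1", SqlDbType.Char, 2))
cmd.Parameters("@param1").Value = "CA"
' The scalar comes back through a special ReturnValue parameter.
cmd.Parameters.Add(New SqlParameter("@ret", SqlDbType.Int))
cmd.Parameters("@ret").Direction = ParameterDirection.ReturnValue
cmd.ExecuteNonQuery()
Dim count As Integer = CInt(cmd.Parameters("@ret").Value)
```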

You can call this procedure by using the Odbc Data Provider objects, as follows.

1. Add an additional button immediately below the txtParam1 textbox.


2. In the Properties window, set the Name property of the button to btnExecuteScalar and set the Text property
to ExecuteScalar.
3. Add the code for this new button, as shown in Listing 4.7.
Listing 4.7 Code to retrieve a scalar value from a stored procedure, using the Odbc Data Provider
objects

Private Sub btnExecuteScalar_Click(ByVal sender As System.Object, _


ByVal e As System.EventArgs) Handles btnExecuteScalar.Click
Dim result As Integer
' Create an instance of a Connection object
Dim cnn As OdbcConnection = New OdbcConnection( _
"DRIVER={SQL Server};server=localhost;uid=sa;database=pubs")
' Create instance of Command object
Dim cmd As OdbcCommand = New OdbcCommand()
txtResults.Clear()
' Set command's connection and command text
cmd.Connection = cnn
cmd.CommandType = CommandType.StoredProcedure

cmd.CommandText = "{call AuthorsInState2(?)}"


' Create parameter and set value
cmd.Parameters.Add("@param1", OdbcType.Char, 2)
cmd.Parameters("@param1").Value = txtParam1.Text
' Open the Connection before calling ExecuteScalar()
cnn.Open()
result = cmd.ExecuteScalar()
MessageBox.Show("Count is " & result, "DataProviderObjects")
cnn.Close()
End Sub

Run the application and enter a two-character state code into the parameter textbox. When you click on the
ExecuteScalar button, a message box should appear, indicating the count of authors in that state. You can
verify this result by using the SQL Server Enterprise Manager to display the data from the authors table.

Note
The default data for the pubs database should yield a count of 2 for the state UT and of 15 for the
state CA.

ExecuteReader
In some ways, we saved the best (or most important) for last. The ExecuteReader method is what you call
in order to execute a command that returns a set of rows (records). In most database applications, it is
probably the execution method that you will use most of the time. This method executes a command that
returns a result set of data rows by means of a DataReader object. You scan the rows one at a time,
sequentially from the first one. We present more detail about the DataReader and give examples in the
next section.


The DataReader Object


The DataReader provides a forward-only, read-only, nonbuffered stream over the rows created by the
ExecuteReader method of the Command object. The DataReader is basically equivalent to a forward-only,
read-only recordset in ADO 2.X. It doesn't support scrolling or updating, and is the fastest way to access
data from a data source. Because the data isn't buffered or stored in any cache, this method is a particularly
good choice for retrieving large amounts of data. Calling the Read method advances the DataReader to the
next record.
The fields of each row of data can be accessed by strongly typed accessors, in addition to the Fields
collection. Accessing field data via the Fields collection, by field name and without regard to type, is done
similarly to accessing fields in an ADO 2.X recordset, as in:

X = MyReader("Myfield")

Note
The DataReader doesn't have an explicit constructor; you can't create a new object instance by
using New(). You must call the ExecuteReader method of the Command object to instantiate a new
object.

Alternatively, when you know the data type of each field, you can access the data by using type-specific
methods. These methods fetch the column indicated by a zero-based index, for example:

X = MyReader.GetInt16(1)

or

Str = MyReader.GetString(2)

The first approach, with its simple name access, provides for improved readability, ease of use, and
compatibility with older programs. The second approach, although requiring more effort, provides greater
performance because it minimizes the number of type conversions performed.
Now make one last addition to Form1 of the DataProviderObjects project:

1. Add a button immediately below the btnExecuteScalar button.


2. In the Properties window, set the Name property of the button to btnExecuteReader and set the Text
property to ExecuteReader.
3. Add the code for this new button, as shown in Listing 4.8.

Note
In addition to showing how to program the DataReader, this example also demonstrates some
other features. For instance, there is a third value in the CommandType enumeration for the
Command object's CommandType property. In addition to Text and StoredProcedure, there is
also TableDirect. This option indicates that the CommandText property specifies the name of a
table where all the columns are returned by the command. Only the Oledb Data Provider supports
this option.
Also, database views are normally dealt with as if they were tables. Therefore you can specify the
name of a view, rather than a table name, when the CommandType is TableDirect.

Listing 4.8 Code to create a DataReader and retrieve field values, using a database view and the
TableDirect command type

Private Sub btnExecuteReader_Click(ByVal sender As System.Object, ByVal e As


System.EventArgs) Handles btnExecuteReader.Click
' Create an instance of a Connection object
Dim cnn As OleDbConnection = New OleDbConnection( _
"provider=SQLOLEDB;server=localhost;uid=sa;database=pubs")
' Create instance of Command object
Dim cmd As OleDbCommand = New OleDbCommand()
txtResults.Clear()
' Set command's connection and command text
cmd.Connection = cnn
cmd.CommandType = CommandType.TableDirect
cmd.CommandText = "EmployeeJobs_view"
' Must open the Connection before calling ExecuteReader()
cnn.Open()
Dim reader As OleDbDataReader
reader = cmd.ExecuteReader()
While reader.Read()
txtResults.Text = txtResults.Text & reader("fname") & _
ControlChars.Tab & reader("lname") & _
ControlChars.Tab & ControlChars.Tab & _
reader("job_desc") & ControlChars.CrLf
End While
reader.Close()
cnn.Close()
End Sub

Note
Always remember to call Read() before trying to access data from the DataReader. Unlike a
recordset in ADO 2.X, which is automatically positioned on the first row immediately after being
loaded with data, the ADO.NET DataReader must be explicitly positioned to the first row by an
initial call to the Read method.

You could also write the While loop by using the more efficient, strongly typed, field accessors:

While reader.Read()
txtResults.Text = txtResults.Text & reader.GetString(1) & _
ControlChars.Tab & reader.GetString(2) & _
ControlChars.Tab & ControlChars.Tab & _
reader.GetString(0) & ControlChars.CrLf
End While

Another change that you might want to make, depending on your taste and style, is to combine the
declaration of the DataReader and the execution of the ExecuteReader method into a single line. You can
replace

Dim reader As OleDbDataReader


reader = cmd.ExecuteReader()

with

Dim reader As OleDbDataReader = cmd.ExecuteReader()

When you run the DataProviderObjects project and click on the ExecuteReader button, the textbox should
display the data from the EmployeeJobs_view, as shown in Figure 4.4.
Figure 4.4. Results of successfully running the ExecuteReader command of Listing 4.8

Note
Always call the Close method when you have finished using the DataReader object. If you have
output or return value parameters defined for the Command object that you're using, they aren't
available until the DataReader is closed. Also, the DataReader's connection is kept open until
you close either the DataReader or the connection.

The DataReader also provides an easy and efficient way of building a data-driven Web page, by binding it to
a WebForms DataGrid. We show how to do that in Chapter 11.


Using the Connection and Command Design-Time Components


In the Visual Studio Toolbox, the Data tab contains components that mirror some of the data access
components. They allow you to set property values via the Properties window at design time, rather
than doing everything in code at run-time. They also provide visual tools for setting some of the more
complex properties.
You can implement Listing 4.8 by using some of these components, as follows.

1. Add another form, Form2, to the DataProviderObjects project.


2. In the Properties window for Form2, set its Text property to Connection and Command
Components.
3. Enlarge the size of Form2.
4. Add a textbox, Textbox1, to Form2.
5. In the Properties window, set the textbox Multiline property to True, and the ScrollBars property
to Both.
6. Enlarge Textbox1 to cover most of Form2.
7. From the Data tab of the toolbox, drag an OledbConnection component onto the design surface of
Form2. As this component isn't visible at run-time, it will appear in the component tray beneath the
form's design surface.
8. In the Properties window for this new OledbConnection component, named OledbConnection1, set the
ConnectionString property to

provider=SQLOLEDB;server=localhost;uid=sa;database=pubs

9. From the Data tab of the toolbox, drag an OledbCommand component onto the design surface of
Form2. As this component isn't visible at run-time, it too will appear in the component tray beneath
the form's design surface.
10. In the Properties window for this new OledbCommand component, OledbCommand1, set the
Connection property to the OledbConnection1 component and set the CommandText property to

select * from EmployeeJobs_view

11. Add the code for the Form2_Load event handler, as shown in Listing 4.9.

Tip
In Chapter 6 we present tools that enable you to prepare graphically the connection string and
command text, instead of having to enter the actual text strings as here.
Listing 4.9 Code to create a DataReader and retrieve field values with OledbConnection and
OledbCommand components

Private Sub Form2_Load(ByVal sender As System.Object, _


ByVal e As System.EventArgs) Handles MyBase.Load
'Must open the connection before calling ExecuteReader
OledbConnection1.Open()
Dim reader As OleDbDataReader = OledbCommand1.ExecuteReader()
TextBox1.Clear()
While reader.Read()
TextBox1.Text = TextBox1.Text & reader("fname") & _
ControlChars.Tab & reader("lname") & _
ControlChars.Tab & ControlChars.Tab & _
reader("job_desc") & ControlChars.CrLf
End While
'Deselect all lines in textbox
TextBox1.SelectionLength = 0
reader.Close()
OledbConnection1.Close()
End Sub

12. Right-click on the DataProviderObjects project in the Solution Explorer and select Properties from the
pop-up menu displayed.
13. Select the General item in the Common Properties folder and then set the Startup object property to
Form2.
If you now run the DataProviderObjects project, the textbox on Form2 will display all the data from the
EmployeeJobs_view, as shown in Figure 4.5.
Figure 4.5. Results of displaying Form2, which utilizes the OledbConnection and OledbCommand
components


Other Data Provider Objects


We have already presented several data provider classes, such as the Parameter and Parameters objects,
in addition to the four core objects listed in Table 4.1. In Chapter 6 we explore the DataAdapter in depth,
after introducing DataSet and its associated objects in Chapter 5.
Before ending this chapter, let's look at one last data provider object: the Transaction object. Transactions
are used to ensure that multistep operations are completed in an "all or nothing" manner. That is, either all
the steps of the overall operation complete successfully or none of them do. The classic example of a
transaction is a bank transfer. The operation consists of two steps: subtracting an amount from one account
and adding that amount to a different account. We certainly want to avoid the situation where only the first
of these two steps is completed successfully!
ADO.NET Data Providers implement a Transaction object, which contains the fundamental methods
required for using transactions. The Commit method completes the current transaction, and the Rollback
method cancels the current transaction. The transaction is begun and the Transaction object is created by
calling the BeginTransaction method on an open Connection object. We show the Transaction object in
action in Business Case 4.1.
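Stripped of any business logic, the pattern looks like this. The sketch uses the bank-transfer example just described; the Accounts table and its columns are invented for illustration:

```vb
' Sketch of the basic transaction pattern; the Accounts table
' and its columns are hypothetical.
Dim cnn As SqlConnection = New SqlConnection( _
    "server=localhost;uid=sa;database=novelty")
cnn.Open()
' BeginTransaction on an open connection creates the Transaction.
Dim trans As SqlTransaction = cnn.BeginTransaction()
Dim cmd As New SqlCommand()
cmd.Connection = cnn
cmd.Transaction = trans   ' enlist the command in the transaction
Try
    cmd.CommandText = _
        "UPDATE Accounts SET Balance = Balance - 100 WHERE ID = 1"
    cmd.ExecuteNonQuery()
    cmd.CommandText = _
        "UPDATE Accounts SET Balance = Balance + 100 WHERE ID = 2"
    cmd.ExecuteNonQuery()
    trans.Commit()       ' both steps succeeded
Catch ex As Exception
    trans.Rollback()     ' any failure cancels both steps
Finally
    cnn.Close()
End Try
```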

Business Case 4.1: Writing a Routine to Archive Old Orders By Year


Once a database system has been in use for a long time (the definition of "long" is relative), certain data can
and should be archived. Such archiving should be in addition to the mandatory regularly scheduled backups
for any production system. Archived data is data that you don't need to have constantly available (online)
but that you may need to access on occasion. By removing this data from the main online tables, you can
improve the performance of accessing those tables because there are fewer records to search or filter.
However, as an archived table is often stored in the identical table format, it can be accessed in a uniform
manner, if and when required. In this business case we guide you through development of a form to
accomplish a simple archive of data from the tblOrder table in the Novelty database. The form will allow you
to select the year of the orders that you would like to archive. After you select the desired year, the following
steps must occur.

1. A new table, tblOrderXXXX, is created in the Novelty database, where "XXXX" will be replaced with the
year of the orders in the archive table.
2. All the relevant records are copied from the tblOrder table to the tblOrderXXXX table.
3. All the copied records are deleted from tblOrder.
The tricky part here is that you want to ensure that, if any of the those steps fail, the entire operation will be
canceled. You don't want to have a new table if you can't put data into it. You don't want records in the
archive if you can't delete them from the main table. And you certainly don't want to delete the records from
tblOrder if you can't copy them to the archive table. You can make use of the Transaction object and have
the database roll back to its previous state in case there are any failures. Go ahead and build a form to do all
this. Doing so will also give you a chance to review and practice much of what we presented in this chapter.

1. Launch Visual Studio .NET.
2. Create a new Visual Basic Windows Application project.
3. Name the project BusinessCase4.
4. Specify a path for saving the project files.
5. Enlarge the size of Form1.
6. In the Properties window for Form1, set its Name property to frmArchive and its Text property to
Archive Orders.
7. Add a listbox named lstYears, a label named Label1, a button named btnOK, and a button named
btnCancel.
8. Set the label's Text property to Archive all orders for the year. Set the Text properties of the buttons
to OK and Cancel, respectively.
9. Arrange the controls as shown in Figure 4.6.
Figure 4.6. Arrangement of the controls on frmArchive

At the top of the file insert the first line of code, to import the SqlClient namespace:

Imports System.Data.SqlClient

Within the body of the class definition for frmArchive add the code shown in Listing 4.10.
Listing 4.10 Code to archive data to a new table

Private Sub frmArchive_Load (ByVal sender As System.Object, _


ByVal e As System.EventArgs) Handles MyBase.Load
lstYears.Items.Add("1995")
lstYears.Items.Add("1996")
lstYears.Items.Add("1997")
lstYears.Items.Add("1998")
lstYears.Items.Add("1999")
lstYears.Items.Add("2000")

lstYears.Items.Add("2001")
lstYears.Items.Add("2002")
'Set Default
lstYears.SelectedIndex = 0
End Sub
Private Sub btnCancel_Click (ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles btnCancel.Click
Me.Close()
End Sub

Private Sub btnOK_Click(ByVal sender As System.Object, _


ByVal e As System.EventArgs) Handles btnOK.Click
Dim sql As String
Dim result As Integer
Dim records As Integer
Dim SelectedYear As String

'Create instances of the Connection and Command objects


Dim cnn As SqlConnection = New SqlConnection ( _
"server=localhost;uid=sa;database=novelty")
Dim cmd As New SqlCommand()
Dim trans As SqlTransaction
'First get year
SelectedYear = lstYears.SelectedItem.ToString
'Put the code inside a Try-Catch block to trap failures
Try
'Open the Connection and begin transaction
cnn.Open()
trans = cnn.BeginTransaction
'Enlist the command in this transaction
cmd.Connection = cnn
cmd.Transaction = trans
'SQL to insert appropriate records into archive table
sql = "SELECT * INTO tblOrder" & SelectedYear & _
" FROM tblOrder WHERE year(OrderDate) = " & SelectedYear
'This command is in the transaction.
cmd.CommandText = sql
result = cmd.ExecuteNonQuery()
'Show results of inserting records into archive
If result > 0 Then
records = result

MessageBox.Show(records & _
" records inserted successfully into tblOrder" _
& SelectedYear)
Else
MessageBox.Show( _
"No records inserted into tblOrder" _
& SelectedYear)
'Since no records, don't keep the created
'table; cancel (roll back) the transaction
trans.Rollback()
End If
If records > 0 Then
'SQL to delete appropriate records from current
'table
sql = "DELETE FROM tblOrder WHERE year(OrderDate) = " _
& SelectedYear
'This command is also in the same transaction
cmd.CommandText = sql
result = cmd.ExecuteNonQuery()
'Show results of deleting records
If result = records Then
MessageBox.Show(records & _
" records deleted successfully")
'If we got to here, then everything
'succeeded
trans.Commit()
Else
MessageBox.Show( _
"Wrong number of records deleted!")
trans.Rollback()
End If
Else
'nothing to do
End If

Catch ex As Exception
'If we got to here, then something failed and
'we cancel (rollback) the entire transaction.
Try
'Display error message.
MessageBox.Show(ex.Message & _
ControlChars.CrLf & ControlChars.CrLf & _
"Transaction Failed !")

trans.Rollback()
Catch ex2 As Exception
End Try
Finally
cnn.Close()
End Try
End Sub

The routine frmArchive_Load initializes lstYears with the years to choose from and selects the first
(earliest) year by default. You could, of course, improve this routine so that it queries tblOrder to retrieve a
list of years that have orders in that table. For now, the simpler method will suffice.
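A sketch of that improvement might look as follows. It assumes the same connection string as Listing 4.10 and replaces the hard-coded Items.Add calls with a DISTINCT query:

```vb
' Sketch: fill lstYears with only the years actually present
' in tblOrder, instead of a hard-coded list.
Private Sub LoadYearsFromDatabase()
    Dim cnn As SqlConnection = New SqlConnection( _
        "server=localhost;uid=sa;database=novelty")
    Dim cmd As New SqlCommand( _
        "SELECT DISTINCT year(OrderDate) AS OrderYear " & _
        "FROM tblOrder ORDER BY OrderYear", cnn)
    cnn.Open()
    Dim reader As SqlDataReader = cmd.ExecuteReader()
    lstYears.Items.Clear()
    While reader.Read()
        lstYears.Items.Add(reader("OrderYear").ToString())
    End While
    reader.Close()
    cnn.Close()
    'Set default to the earliest year
    lstYears.SelectedIndex = 0
End Sub
```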
The Click event handler for the Cancel button simply closes the form, which in this case will also end the
program. All the action takes place in the btnOK Click event handler. After the variable declarations, you
should obtain the selected year from lstYears and save it for later. To ensure that you cancel the transaction
if any error (exception) occurs, you should wrap all the active code inside a Try-Catch-Finally block.
Because transactions are defined at the connection level, first open the connection and then create the
Transaction object by calling BeginTransaction on the open connection. This Connection object and
Transaction object are then assigned to the Command object that will be used to execute the database
commands.
The first two steps of creating the archive table and copying the specified rows into the new table are
performed in a single SQL statement by using the SELECT INTO statement. This is a regular SELECT
statement, with the insertion of an Into tablename clause. The table specified in this additional clause is
automatically created; the command generates an exception if the table already exists. The year that you
select is appended to tblOrder to create the name of the new archive table that is to be created.

Note
The SELECT INTO statement doesn't create any of the indexes that exist in the original table. You
would probably want to create indexes on one or more of the fields to improve the performance of
queries against that table.
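For example, after archiving the 1996 orders you might rebuild an index on the date column; the index name here is only a suggestion:

```sql
-- SELECT INTO copies data only, not the original table's indexes,
-- so recreate any index you need on the archive table.
CREATE INDEX ix_tblOrder1996_OrderDate
    ON tblOrder1996 (OrderDate)
```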

The ExecuteNonQuery method is called to execute the SQL statement. This method returns the number of
rows affected. If this number is greater than zero, you know that all went well in creating and populating the
new table, and the routine can continue. Otherwise, either the table could not be created or there are no
rows to copy. In either case, the transaction is rolled back so that, even if the table was created successfully,
the database won't be cluttered with empty, useless tables.
So long as at least one record was added to the archive table, the process continues. The next step is to
delete the appropriate records from the original tblOrder. You do so with a simple DELETE statement, with
the selected year appended to the WHERE clause. If this method succeeds (that is, the number of affected
records equals the number of records inserted into the archive table), all is well and the transaction is
committed. Otherwise, something went wrong (such as changes to relevant records, permission denied, or

server down), and the entire transaction is rolled back. This rollback ensures that, if you failed to delete the
correct records from tblOrder, the archive table, tblOrderXXXX, will be deleted.
Up to this point, the routine handled those situations that occur as part of the sequential execution of the
routine. However, run-time exceptions must also be handled. For example, if you try to create a table that
already exists in the database, an exception is generated. Such exceptions are caught and handled in the
Catch block. The text of the exception is displayed and the entire transaction is canceled and rolled back.

Note
The second, nested Try-Catch block is needed to handle the case when the archive table cannot
be created (for example, because it already exists). The reason is that, although the transaction
was begun, no data modification statements were executed and therefore nothing was written into
the log that can be rolled back.

In the Finally block, the connection that was used is closed. That needs to be done whether or not an error
occurred.
Go ahead and experiment with this project, by creating archives for different years. You can verify its
operation by looking at the new archive table, as well as the original tblOrder (both before and after the
archiving). Don't forget that you can always reset the contents of tblOrder by running the scripts to create
and/or populate tblOrder.


Summary
In this chapter we introduced ADO.NET generally and the objects of a .NET Data Provider in particular. Data
providers are the ADO.NET interface to physical data stores and provide a connected mode programming
model. We explored the properties and methods of the Connection, Command, Parameter,
DataReader, and Transaction objects, including examples involving the standard SqlClient, Oledb, and
Odbc Data Providers. In Chapter 5, we show how the disconnected programming model, based on the
DataSet and DataAdapter objects, builds on these objects and examples.

Questions and Answers

Q1:

What I understand from this chapter is that ADO.NET is really designed for disconnected
use and that there is no support for server-side cursors or pessimistic locking. What do
I do if my existing application uses one of these, or if the specifications of my new
project require server-side cursors or pessimistic locking? Am I stuck with Visual Basic
6.0?

A1:

First of all, carefully analyze your application and be sure that you aren't stuck in an old way of
thinking, that you aren't just used to always using pessimistic locking or server-side cursors. If,
however, after thinking about the options, you're convinced that you require either one of them,
or something else that ADO.NET doesn't support, don't give up. You can still use VB.NET for
these applications. The .NET Framework includes extensive support for COM interoperability,
which allows your .NET application to utilize COM objects and for your COM objects to utilize
managed (.NET) code. In other words, you can continue to use not only ADO 2.X, but also any
other COM objects for which you don't yet have .NET replacements. Of course, there is no free
lunch, and the price to be paid is the performance hit you will suffer when going between the two
(COM and .NET) worlds. Is it too much of a performance degradation? Like most performance
questions, the answer is that you have to test it for yourself with your specific application.

Q2:

Programming the objects, methods, and properties discussed in this chapter doesn't
seem all that different from what I have been doing with ADO 2.X. Why should I bother
making the switch to ADO.NET?

A2:

In one sense, you're right. Performing these same basic connected-to-the-database operations
isn't much different from using ADO 2.X. However, in addition to the minor improvements and
conveniences that we've mentioned, you should keep several important points in mind.

1. Visual Basic .NET and the .NET platform are a whole new world, and ADO.NET is the way to
access data in this world.
2. Although you can continue to use existing COM components such as ADO 2.X while
developing .NET applications, you will still suffer performance overhead in accessing these
COM components and the need to install and register them properly.
3. The data provider objects discussed in this chapter are only part of the story, dealing with
physically reading from and writing to the database in a connected mode. The other part of
the story, the disconnected mode of operation centered on the DataSet object, is where
ADO.NET really shines and has many obvious advantages. We needed to cover this
chapter's basic objects first, as they are the necessary building blocks for what comes next.


Chapter 5. ADO.NET: The DataSet


IN THIS CHAPTER

Applications and Components of the DataSet


Populating and Manipulating the DataSet
Using the DataSet Component
The DataSet object is the central, if not the most revolutionary, element of the ADO.NET approach to data
access. The DataSet is an in-memory cache of data from one or more data sources. You can think of it as a
full-featured, in-memory database. Perhaps its most distinctive characteristic is that it is used in a totally
disconnected fashion. The managed .NET Data Provider objects discussed in Chapter 4 provide functionality
while being physically connected to a database or other data source. In contrast, the DataSet and its
related objects (such as the DataTable, DataRow , DataColumn, and DataRelation objects) provide rich
functionality while being physically disconnected from the actual data source.
Another key characteristic of the DataSet is that, once it has been loaded with data, it doesn't know where
that data came from. It just contains data and allows you to manipulate it in a relational manner. Thus the
DataSet and its objects are generic and not specific to any particular data provider. That is, there is only a
DataSet object, not a SqlDataSet, OledbDataSet, and so on.
If a DataSet doesn't know the source of its data, how is it loaded? Moreover, how are changes updated back
to the data source? The answers to these questions normally come from the DataAdapter object, which is
the bridge between DataSet and the physical data source. The DataAdapter is supplied commands to read
the data from the data source, as well as commands to update, delete, and insert data to that data source.
We cover the DataAdapter in detail in Chapter 6.


Applications and Components of the DataSet


The DataSet is the key object in ADO.NET. It is a universal container for data, regardless of the actual
source of the data. The DataSet and its associated objects offer a relational view of data, although it can
also load or save its data and/or schema information as XML. The DataSet provides an explicit in-memory
data model that is always fully disconnected from the data source and can easily be passed between address
spaces and machines.
Typical applications and uses for the DataSet include the following.

Application data The DataSet is a simple and flexible object for use in storing local application data.
Accessing the data is as easy as accessing data in an array, but the DataSet also provides advanced
features such as sorting and filtering.
Remoting data The DataSet automatically uses XML to marshal (transfer) data from one computer
to another. This capability greatly eases the development of applications with Web Services, SOAP, or
low-level remoting.
Caching data The DataSet can cache data during development of ADO.NET or other types of
distributed applications, avoiding multiple across-the-network hits.
Persisting data The DataSet provides methods to save its data and schema information in standard
XML formats.
User interaction The DataSet effectively supports user interaction for different GUIs by combining its
sorting, filtering, and scrolling capabilities with the ability to bind different views of the data to both
Windows and Web Forms.
The data in a DataSet is organized in one or more DataTables. Each DataTable is ignorant of the source
of the data it contains, which implies that the DataSet and DataTables are always disconnected from the
data source. The DataSet merely contains multiple data tables and provides for the data to be manipulated,
transported, or bound to user interface controls. Figure 5.1 shows the DataSet and its subsidiary objects.
Figure 5.1. The DataSet and its subsidiary objects

These subsidiary objects may be described as follows.

DataTable Each DataTable contains collections of DataRow, DataColumn, and Constraint
objects, along with collections of DataRelation objects linking to other parent and child tables. The
view of data offered by the DataTable is similar to that of the ADO 2.X Recordset.
DataColumn The DataColumn object is the basic unit for retrieving or defining the schema definition
for a DataTable. It contains the specific information for each column (field) in the DataTable,
including name, data type, and other attributes, such as the Unique, ReadOnly, AllowDBNull, and
AutoIncrement properties. It also has an Expression property, which allows you to calculate the
value for a column or to create an aggregate column.

DataRow The DataRow object represents a single record in the DataTable and is the object used to
add, retrieve, and/or modify data in the DataTable. You can navigate the Rows collection of a
DataTable sequentially or by direct access to specific rows.
DataRelation The DataRelation object defines a relationship between two tables in a DataSet .
It represents a classic parent-child or primary key-foreign key link between rows of data in the two
tables. You navigate the relations between the tables by means of the ChildRelations and
ParentRelations collections (of DataRelation objects) of the DataTable object.

Constraint The Constraint object defines a rule that can be enforced to ensure the integrity of
the data in the DataTable. It includes the familiar UniqueConstraint, which ensures that all
values within a column (or set of columns) are unique, and the ForeignKeyConstraint, which
determines the action to be taken on rows in the related table. The defined constraint can include one
or more DataColumns. Each DataTable has a Constraints property, which is a collection of
constraints for that table.
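We return to constraints in the Table Constraints section later in this chapter, but as a rough sketch, creating and attaching constraints looks like this (the table and column names are illustrative, matching the Employees/Departments example used throughout this chapter):

```vbnet
' Sketch: assumes a parent table dtDepartments with an "ID" column
' and a child table dtEmployees with a "DepartmentID" column.
Dim uc As New UniqueConstraint(dtDepartments.Columns("ID"))
dtDepartments.Constraints.Add(uc)

Dim fk As New ForeignKeyConstraint( _
    dtDepartments.Columns("ID"), _
    dtEmployees.Columns("DepartmentID"))
fk.DeleteRule = Rule.Cascade  ' delete child rows with their parent
dtEmployees.Constraints.Add(fk)
```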
Populating and Manipulating the DataSet


There are three ways to fill a DataTable in a DataSet with data:

1. Programmatically by directly defining metadata and inserting data


2. Using a DataAdapter to query a data source
3. Loading an XML document
In this chapter we present the first method, that of directly defining the schema and loading the data. In
Chapter 6 we discuss the second method, that of using the DataAdapter. In Chapter 10 we cover the third
method, that of filling a DataTable with data from an XML document. We begin in this section by focusing
on populating the DataSet programmatically, including basic DataSet and DataTable access and
functionality.
Note, however, that once the DataSet has been loaded with data, the method used doesn't matter. All
subsequent operations involving this data and its manipulation are done identically.

Defining DataTable Schemas


Once again, we work with you to build a simple form to illustrate the concepts introduced. Begin by following
these steps.

1. Launch Visual Studio .NET.
2. Create a new Visual Basic Windows Application project.
3. Name the project DataSetCode.
4. Specify a path for saving the project files.
5. Enlarge the size of Form1.
6. In the Properties window for Form1, set its Name property to frmDataSets and its Text property to DataSets.
7. In the upper-left corner of the form, add a button from the Windows Forms tab of the Toolbox.
8. In the Properties window, set the Name property of the button to btnCreateDS and set the Text property to Create DataSet.
9. From the Windows Forms tab of the Toolbox, add a listbox to frmDataSets and place it on the right side of the form.
10. In the Properties window, set the Name property of the listbox to lstOutput.
11. Enlarge the listbox so that it covers about 80 percent of the area of the form.

The first piece of code that you need to add to the top of the file is

Imports System
Imports System.Data

Then the following code goes within the body of the class definition for frmDataSets:

Private dsEmployeeInfo As DataSet


Private Sub btnCreateDS_Click(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles btnCreateDS.Click
CreateDataSet()
AddData()
DisplayDataSet()
End Sub

The event handler subroutine btnCreateDS_Click will call three routines, one for each of the three phases
of this application. The variable dsEmployeeInfo is a DataSet accessed by each of the subroutines called
by btnCreateDS_Click .

Note
Although you normally use a DataSet to contain the DataTables that you are using, as here,
you could alternatively just define and use DataTable (and subsidiary) objects on their own. You
might do so with simple uses of a DataTable when you don't need to link multiple tables.

The first thing you need to do is to define the schema, or structure, for each table that you want to use. That
consists of defining a DataColumn object and setting its properties for each of the columns in the table. The
code for CreateDataSet is shown in Listing 5.1.
Listing 5.1 Code to create DataSet and DataTable schemas

Private Sub CreateDataSet()


'Create an "EmployeeInfo" DataSet
dsEmployeeInfo = New DataSet()
'Create an "Employees" Table
Dim dtEmployees As DataTable = New _
DataTable("Employees")
dtEmployees.CaseSensitive = False
dtEmployees.Columns.Add("FirstName", _
Type.GetType("System.String"))
dtEmployees.Columns.Add("LastName", _
Type.GetType("System.String"))
dtEmployees.Columns.Add("DepartmentID", _
Type.GetType("System.Int32"))
'Add Employees table to EmployeeInfo DataSet
dsEmployeeInfo.Tables.Add(dtEmployees)
' Create a "Departments" Table
' We'll do this one with different function overrides
' This approach is more lengthy for standard columns,
' but allows setting other column properties (e.g.
' ReadOnly & Unique) before adding the DataColumn to

' the Columns collection


Dim dtDepartments As DataTable
dtDepartments = New DataTable()
dtDepartments.TableName = "Departments"
dtDepartments.MinimumCapacity = 5
dtDepartments.CaseSensitive = False
Dim NewColumn As New DataColumn()
With NewColumn
.ColumnName = "ID"
.DataType = Type.GetType("System.Int32")
.ReadOnly = True
.Unique = True
.AutoIncrement = True
End With
dtDepartments.Columns.Add(NewColumn)
NewColumn = New DataColumn()
With NewColumn
.ColumnName = "DepartmentName"
.DataType = Type.GetType("System.String")
.Unique = True
.AllowDBNull = False
End With
dtDepartments.Columns.Add(NewColumn)
'Add Departments table to EmployeeInfo DataSet
dsEmployeeInfo.Tables.Add(dtDepartments)
End Sub

After an instance of the DataSet dsEmployeeInfo is created, we create the Employees table by using one of
the overloaded constructors of the DataTable and passing in the table name. We then set the
DataTable's CaseSensitive property. This property determines whether the sorting, searching, filtering,
and other operations of the DataTable are performed in a case-sensitive manner. By default, this value is
set to the parent DataSet object's CaseSensitive property, or to False if the DataTable was created
independently of a DataSet.

Note
The CaseSensitive property applies only to the data in the DataTable object. It doesn't affect
the case-sensitivity rules applied to DataTable objects themselves. A DataSet may contain two
or more tables (or relations) that have the same name but that differ in case, such as mytable and
Mytable. When that occurs, references to the tables must match exactly (case-sensitive search).
However, if there is only one such table, any reference to that name will succeed (case-insensitive
search).

We then add three column definitions to the dtEmployees DataTable by supplying the Columns
collection's Add method with the column name and data type. Note that the data type specified is the
.NET data type and not some database data type. If we don't supply a data type, the column's type
defaults to a string. The Employees table is then added to the dsEmployeeInfo DataSet.

We repeat the process for the Departments table, only this time we use different constructor and function
overloads. They achieve the same goal in a different way. You can choose an approach based on personal
taste, company standards, or the specifics of the task.
The MinimumCapacity property of the dtDepartments DataTable is set to 5, specifying that the DataTable
instance should start by internally creating 5 rows. Setting this to a value other than the default of 25 allows
you to influence how the required resources are allocated and may optimize performance in critical
situations. Of course, these rows don't actually exist insofar as the user of the DataTable is concerned until
rows are actually added to the DataTable.
For these columns, we also set various column properties before adding them to the schema defined by the
Columns collection. Properties such as ReadOnly, Unique, AllowDBNull, and AutoIncrement should be
familiar to you if you've had any experience building database applications.

ReadOnly = True indicates that the column's value cannot be modified.


Unique = True indicates that each of the values of this column in all the rows in the table must be
unique. This property is implemented by having a UniqueConstraint automatically created for the
column. We discuss this method in the Table Constraints section later in this chapter.

AllowDBNull = True indicates that null values are allowed for this column.
AutoIncrement = True indicates that the column's value is incremented each time a new row is
added to the table. You can specify the starting value and increment step with the
AutoIncrementSeed and AutoIncrementStep properties, respectively.

Note
The DataTable will accept a row with a value assigned for a column with AutoIncrement set to
True and will use the AutoIncrement value only if the column's value is different from its default
value.
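For example, to have the generated values start at 1 and increase by 1 (a sketch, continuing the ID column definition from Listing 5.1):

```vbnet
With NewColumn
    .AutoIncrement = True
    .AutoIncrementSeed = 1  ' first generated value
    .AutoIncrementStep = 1  ' amount added for each new row
End With
```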

Other common DataColumn properties include MaxLength (for columns of type String), DefaultValue ,
and Table, which returns a reference to the DataTable to which the column belongs.
A column can also be defined to have an expression that is used to calculate the column's value, create an
aggregate column, or filter rows. This expression can consist of column values from the current or other
rows, constants, operators, wildcard characters, aggregate functions, or other expression functions. For
more information and examples of column expressions, see the help topic for the DataColumn Expression
property.
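As a brief illustration, an expression column combining the two name columns of the Employees table could be added like this (a sketch; the column name FullName is our own):

```vbnet
' Assumes the dtEmployees table defined in Listing 5.1.
Dim colFullName As New DataColumn("FullName", _
    Type.GetType("System.String"))
' Expressions may use column references, literals, and operators
colFullName.Expression = "FirstName + ' ' + LastName"
dtEmployees.Columns.Add(colFullName)
```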

Adding Data to a DataTable


Once you have defined a DataTable and its schema, you can start adding rows of data. The code in Listing
5.2, which you should add to frmDataSets, shows how to add data rows programmatically to a DataTable.

The subroutine AddData adds four rows of data to the Departments tables and three rows of data to the
Employees table in a three-step process.

1. Create a new instance of a DataRow for the desired table by calling the NewRow method of that
DataTable.
2. Assign values to columns of that row.
3. Add the row to the Rows collection of the table by passing the DataRow object to the Add method of
the table's Rows property.
Listing 5.2 Code to add data programmatically to a DataTable

Private Sub AddData()


Dim dtDepartments As DataTable = _
dsEmployeeInfo.Tables("Departments")
Dim dtEmployees As DataTable = _
dsEmployeeInfo.Tables("Employees")
'Add 4 records to Departments table
Dim rowDept As DataRow
rowDept = dtDepartments.NewRow
rowDept("DepartmentName") = "Administration"
dtDepartments.Rows.Add(rowDept)
rowDept = dtDepartments.NewRow
rowDept("DepartmentName") = "Engineering"
dtDepartments.Rows.Add(rowDept)
rowDept = dtDepartments.NewRow
rowDept("DepartmentName") = "Sales"
dtDepartments.Rows.Add(rowDept)
rowDept = dtDepartments.NewRow
rowDept("DepartmentName") = "Marketing"
dtDepartments.Rows.Add(rowDept)
'Add 3 records to the Employees table
Dim rowEmployee As DataRow
rowEmployee = dtEmployees.NewRow
rowEmployee("FirstName") = "Jackie"
rowEmployee("LastName") = "Goldstein"
rowEmployee("DepartmentID") = 2
dtEmployees.Rows.Add(rowEmployee)
rowEmployee = dtEmployees.NewRow
rowEmployee("FirstName") = "Jeffrey"
rowEmployee("LastName") = "McManus"
rowEmployee("DepartmentID") = 3
dtEmployees.Rows.Add(rowEmployee)
rowEmployee = dtEmployees.NewRow
rowEmployee("FirstName") = "Sam"
rowEmployee("LastName") = "Johnson"

rowEmployee("DepartmentID") = 3
dtEmployees.Rows.Add(rowEmployee)
End Sub

Note
You can also add a new row of data to a table by passing the Add method an array (of Objects)
containing the data in the order of the columns in the table definition. Adding the
last employee in Listing 5.2 with this approach would look like this:

Dim empData(2) As Object


empData(0) = "Sam"
empData(1) = "Johnson"
empData(2) = 3
dtEmployees.Rows.Add(empData)

Updating the DataSet


To update an individual row in a table, simply access the desired row and assign a new value to one of its
columns. If you wanted to change the department to which Sam Johnson belongs, you would write

dtEmployees.Rows(2)("DepartmentID") = 2

Note
In this line of code the row specifier (2) is hard-coded, taking advantage of the fact that the order
of the rows in the table is known. This practice isn't a particularly good one, and we show a much
better way to find a specific row (or rows) in the section Accessing Data from a DataTable.

You can make as many changes as you want, but they all are pending until the AcceptChanges method is
called and commits the changes. There is also a RejectChanges method that cancels and rolls back any
changes made since the data was loaded or AcceptChanges was last called.

Note

The AcceptChanges (and RejectChanges) method is available at several different levels. The
DataTable, DataSet, and DataRow classes all support this method. A call to the DataSet 's
AcceptChanges will cause AcceptChanges to be called on each table in the DataSet . A call to
the AcceptChanges of a DataTable will cause AcceptChanges to be called on each of the rows
in that table. Thus you can commit any changes row by row by calling AcceptChanges for each
row individually, or in one fell swoop with a single call to AcceptChanges on the DataSet
containing the data. The same holds for the RejectChanges method.
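In code, the three levels look like this (using the objects defined earlier in this chapter):

```vbnet
' Commit the changes in a single row
dtEmployees.Rows(0).AcceptChanges()
' Commit the changes in every row of one table
dtEmployees.AcceptChanges()
' Commit the changes in every table of the DataSet
dsEmployeeInfo.AcceptChanges()
```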

Of course, entire rows may be added or deleted. We have already shown you how to add rows. One way to
delete a row is to call the Remove method of the DataRowCollection object (that is, the Rows property of
the DataTable object). This method actually removes the row from the collection. Another way is to call the
Delete method of a specific DataRow object. This method marks the row for deletion, which actually takes
place when AcceptChanges is subsequently called.
Once the Remove method has been called, all data for that row is lost. Even if RejectChanges is
subsequently called, the removed row won't be returned.
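The difference between the two can be sketched as follows:

```vbnet
' Remove: the row is gone immediately and cannot be restored
dtEmployees.Rows.Remove(dtEmployees.Rows(0))

' Delete: the row is only marked, so RejectChanges can restore it
dtEmployees.Rows(0).Delete()
dtEmployees.RejectChanges()   ' the deleted row comes back
' ...or call dtEmployees.AcceptChanges() to commit the deletion
```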

Row States and Versions


Every DataRow object has a property RowState that indicates the current state, or status, of the row. In
addition, each row maintains up to four different versions of the values for that row. As different editing
operations are performed on the row, its state and/or value versions will change. The RowState
enumeration is summarized in Table 5.1, and the DataRowVersion enumeration is summarized in Table
5.2.

Table 5.1. The RowState Enumeration

Unchanged: No changes have been made since the last call to AcceptChanges or since the row was originally filled by a DataAdapter.

Added: The row has been added to a DataRowCollection (that is, the Rows property of DataTable), but AcceptChanges hasn't been called.

Deleted: The Delete method has been called to delete the row, but AcceptChanges hasn't been called.

Modified: The row has been modified, but AcceptChanges hasn't been called.

Detached: The row has been created but hasn't been added to a DataRowCollection; or the Remove method has been called to remove the row from a DataRowCollection; or the Delete method has been called to delete the row and AcceptChanges has been called.

Table 5.2. The DataRowVersion Enumeration

Original: The original values for the row. This version doesn't exist for a row whose RowState is Added.

Current: The current (possibly modified) values for the row. This version doesn't exist for a row whose RowState is Deleted.

Default: The default row version for the row, which depends on the row's current RowState. If the RowState is Deleted, the default row version is Original. If the RowState is Detached, the default row version is Proposed. Otherwise, the default row version is Current.

Proposed: The proposed values for the row. This version exists only during an edit operation (begun by calling BeginEdit and ended by calling either EndEdit or CancelEdit) or for a row that hasn't been added to a DataRowCollection.

If a row's RowState is Deleted when the AcceptChanges method is called, the row is removed from the
DataRowCollection. Otherwise, the Original row version is updated with the Current row version and
the RowState becomes Unchanged.
Conversely, if a row's RowState is Added when RejectChanges is called, the row is removed from the
DataRowCollection. Otherwise, the Current row version is updated with the Original row version and
the RowState becomes Unchanged.

Note
As not all four row versions are available in all situations, you can call the HasVersion method of
the DataRow to check whether a specific version is available in the current state. It is passed
one of the four members of the DataRowVersion enumeration and returns a Boolean value
indicating whether the specified version currently exists.

A few comments are in order regarding the Proposed version of a DataRow . When you call the BeginEdit
method of a DataRow , the normal actions and events are suspended, allowing the user to make multiple
changes to the row without causing the execution of validation rules. While in this mode, changes made are
not reflected in the Current version of the row. Instead, they are reflected in the Proposed version of the
row. Once the EndEdit method has been called, the Proposed values are transferred to the Current
values. Any changes can be canceled by calling CancelEdit before EndEdit . Note that none of the changes
are permanently committed until AcceptChanges is called.

Note
You can always access a specific version of a DataRow column (assuming that it exists) by
specifying the desired version as a second parameter to the DataRow's Item method. That's true
whether you call the Item method explicitly or implicitly, as in

dtEmployees.Rows(2).Item("lastname", DataRowVersion.Proposed)

or

dtEmployees.Rows(2)("lastname", DataRowVersion.Original)

Row and Column Errors


ADO.NET provides a flexible mechanism for defining and handling user-defined errors for the rows and
columns of a DataTable. This mechanism permits application-defined validation of data. It allows the
flagging of errors when they are detected but postpones resolution of these errors until a later point in the
application's workflow. (Don't confuse these errors with the regular system-defined run-time errors
(exceptions) handled by the .NET Framework's standard exception-handling mechanism, Try-Catch-Finally.)
When an application detects a validation error, it flags the error by setting an error description for a row or
an individual column. Setting a DataRow's RowError property indicates that the particular row contains an
error, such as:

myDataRow.RowError = "Something wrong here"

Calling the SetColumnError method of a DataRow indicates an error in a specific column, such as:

myDataRow.SetColumnError(2, "Bad data in this column")

You can retrieve the error strings for a row or column by accessing the RowError property or by calling the
GetColumnError method, respectively. You can clear these errors by setting the respective error strings to an
empty string (""). Or you can do so by calling a DataRow's ClearErrors method, which clears both the
RowError property and all errors that were set by calling SetColumnError.
The DataRow also has a property HasErrors that is True if the row currently has any errors (either at the
row or column level). This property value is reflected up to the table and DataSet levels: if HasErrors is
True for any row in a table, the HasErrors property of the table is also True. Similarly, if the HasErrors
property of any table in a DataSet is True, the HasErrors property of the DataSet is also True. The
DataTable's GetErrors method returns an array of DataRow objects that have errors. It provides a simple
mechanism to determine quickly whether any validation errors exist and, if so, where, as shown in Listing
5.3.
Listing 5.3 Locating errors in all the tables of a DataSet

Private Sub ResolveErrors(ByVal myDataSet As DataSet)


Dim rowsWithErrors() As DataRow

Dim myTable As DataTable


Dim myCol As DataColumn
Dim currRow As Integer
For Each myTable In myDataSet.Tables
If myTable.HasErrors Then
' Get all rows that have errors.
rowsWithErrors = myTable.GetErrors()
For currRow = 0 To rowsWithErrors.GetUpperBound(0)
For Each myCol In myTable.Columns
' Find columns with errors and decide
' how to deal with it.
' A column's error is retrieved with:
' rowsWithErrors(currRow).GetColumnError(myCol)
Next
' Clear the row's errors
rowsWithErrors(currRow).ClearErrors()
Next currRow
End If
Next
End Sub

Accessing Data from a DataTable


Because a DataSet and the DataTables that it contains are always fully populated and not connected to a
data source, the method of accessing data records is very different from that of ADO and other previous data
access models (such as ODBC, DAO, and RDO). As all the data is available simultaneously, there is no
concept of a current record. That in turn implies that there are no properties or methods to move from one
record to another. Each DataTable has a Rows property, which is a collection of DataRow objects.
Individual DataRow objects are accessed with an index or the For Each statement. Thus the ADO.NET
objects offer a simpler, easier, and more efficient array-like approach to navigating and accessing data
records.
Listing 5.4 shows the subroutine DisplayDataSet, which displays the contents of the tables that you
previously defined and loaded with data. It uses the preferred approach of looping through all the elements
of a collection (in this case, the Rows and Columns collections) to display the Employees table. It uses the
alternative method of accessing the row and column elements with a numeric index to display the
Departments table.
Listing 5.4 Code to display the data in DataTables

Private Sub DisplayDataSet()


Dim dr As DataRow
Dim dc As DataColumn
Me.lstOutput.Items.Add("DISPLAY DATASET")
Me.lstOutput.Items.Add("===============")

' Display data in Employees table


For Each dr In dsEmployeeInfo.Tables("Employees").Rows
For Each dc In _
dsEmployeeInfo.Tables("Employees").Columns
Me.lstOutput.Items.Add( _
dc.ColumnName & " : " & dr(dc))
Next
Me.lstOutput.Items.Add("")
Next
Me.lstOutput.Items.Add("")
' Display data in Departments table
' Show how to use index, instead of For Each
Dim row As Integer
Dim col As Integer
For row = 0 To _
dsEmployeeInfo.Tables("Departments").Rows.Count - 1
For col = 0 To _
dsEmployeeInfo.Tables("Departments").Columns.Count - 1
Me.lstOutput.Items.Add( _
dsEmployeeInfo.Tables("Departments").Columns(col).ColumnName _
& " : " & _
dsEmployeeInfo.Tables("Departments").Rows(row)(col))
Next col
Me.lstOutput.Items.Add("-")
Next row
End Sub

You can write the entire subroutine more generically by using generic loops not only for the rows and
columns, but also for the tables in the DataSet . Listing 5.5 shows this approach.
Listing 5.5 A generic implementation of DisplayDataSet

Private Sub DisplayDataSet(ByVal ds As DataSet)


' Generic routine to display the contents of a DataSet
' DataSet to be displayed is passed as parameter
Dim dt As DataTable
Dim dr As DataRow
Dim dc As DataColumn
Me.lstOutput.Items.Add("DISPLAY DATASET")
Me.lstOutput.Items.Add("===============")
For Each dt In ds.Tables
Me.lstOutput.Items.Add("")
Me.lstOutput.Items.Add("TABLE: " & dt.TableName)
Me.lstOutput.Items.Add("")

For Each dr In dt.Rows


For Each dc In dt.Columns
Me.lstOutput.Items.Add( _
dc.ColumnName & " : " & dr(dc))
Next
Me.lstOutput.Items.Add("-")
Next
Next dt
End Sub

Note the overloading of the DisplayDataSet subroutine where we have an identically named routine with a
different signature. Our generic version accepts as a parameter the DataSet to be displayed.
With all the pieces in place, you can now run the DataSetCode project. When you click on the Create DataSet
button, the DataSet and tables will be created, filled, and displayed. The resulting output is shown in Figure
5.2.
Figure 5.2. Results of creating, filling, and displaying the Employees and Departments tables

Note
To test the generic version of DisplayDataSet from btnCreateDS_Click, pass the DataSet
as an argument in the invocation of DisplayDataSet in btnCreateDS_Click:

DisplayDataSet(dsEmployeeInfo)

Finding, Filtering, and Sorting Rows


At times you won't be interested in all the available rows of a DataTable. You might be interested in one
specific row or in a particular subset of the available rows. Two mechanisms let you be selective: the Find
and the Select methods.
The Find method is a method of the Rows property of the DataTable objectthat is, of the
DataRowCollection object. This method is used to find and return a single row specified by a value of the
primary key of the table.
Before you can use Find to locate a specific row in the Departments table defined in Listing 5.1, you must
define a primary key for the table. You do so by assigning one or more table columns to the table's
PrimaryKey property. (Even if the primary key is only a single column, the PrimaryKey property is an
array of DataColumn objects.)
The following lines of code, which you can add to the end of the CreateDataSet procedure of Listing 5.1,
define the DepartmentName column as the primary key for the Departments table:

Dim pk(0) As DataColumn


pk(0) = dtDepartments.Columns("DepartmentName")
dtDepartments.PrimaryKey = pk

Note
When a single column defines the PrimaryKey for a DataTable, the AllowDBNull property of
the column is automatically set to False and the Unique property of the column is automatically
set to True. If the PrimaryKey comprises several columns, only the AllowDBNull property of
the columns is set to False.

Once you've defined a primary key, using the Find method is straightforward:

Dim desiredRow As DataRow


desiredRow = dtDepartments.Rows.Find("sales")

The variable desiredRow will be set to the DataRow with the corresponding primary key value, or it will be
set to Nothing if no such row is found.

If a table's primary key comprises more than one column, the desired values for each of the primary key
columns are passed as elements of an array (typed as Object) to the Find method:

' Set Primary Key
Dim pk(1) As DataColumn
pk(0) = dtEmployees.Columns("FirstName")
pk(1) = dtEmployees.Columns("LastName")
dtEmployees.PrimaryKey = pk
' Try to Find desired data
Dim desiredRow As DataRow
Dim desiredValues(1) As Object
desiredValues(0) = "Sam"
desiredValues(1) = "Johnson"
desiredRow = dtEmployees.Rows.Find(desiredValues)

The Select method of the DataTable returns an array of DataRow objects. The rows returned may match
a filter criterion, a sort order, and/or a state specification (DataViewRowState).
The following lines of code return and display the first names of all of the employees whose last name is
"Johnson."

Dim selectedRows() As DataRow


selectedRows = dtEmployees.Select("LastName = 'Johnson'")
Dim i As Integer
For i = 0 To selectedRows.GetUpperBound(0)
MessageBox.Show(selectedRows(i)("FirstName"))
Next

If you also wanted to have the returned rows sorted by first name in descending order, you could modify the
call to the Select method:

selectedRows = dtEmployees.Select( _
"LastName = 'Johnson'", "FirstName DESC")

Finally, specifying a state as an argument to the Select method allows you to retrieve specific versions of
rows from the table when you're in the midst of editing. For example, retrieving all the original values of
rows, despite the fact that many changes have been made (but AcceptChanges hasn't yet been called), is
done by specifying the OriginalRows row state in the Select method:

selectedRows = dtEmployees.Select(Nothing, Nothing, _


DataViewRowState.OriginalRows)

To select the newly added rows that have a last name of "Johnson", use

selectedRows = dtEmployees.Select("LastName = 'Johnson'", _


Nothing, DataViewRowState.Added)

If you also want to sort them by first name, as before, use

selectedRows = dtEmployees.Select("LastName = 'Johnson'", _


"FirstName DESC", DataViewRowState.Added)

The options that can be specified for the row state are shown in Table 5.3, which describes the members of
the DataViewRowState enumeration. References to modifications are to changes made since the table was
last loaded or AcceptChanges was called.

Table 5.3. The DataViewRowState Enumeration

Added: New rows that have been added
CurrentRows: All current rows (including new, modified, and unchanged rows)
Deleted: All rows that have been marked as deleted
ModifiedCurrent: The current versions of rows that have been modified
ModifiedOriginal: The original versions of rows that have been modified
None: None
OriginalRows: All original rows (including unchanged and deleted rows, but not new rows)
Unchanged: All rows that have not been modified

Table Relations
Because a DataSet can contain multiple tables, it is only natural to expect (at least if you've had some
exposure to relational databases) that you can create links, or relations, between those tables. In ADO.NET,
the DataRelation object provides this functionality.
A DataRelation link relates columns in two tables that have a parent-child or primary key-foreign key
relationship. The classic example of such a relationship is customers and orders, whereby a customer record
is related to one or more order records. The customer record is the parent, and the orders are the children.
We pursue this topic by using the example we started with earlier: the Departments (parent) and
Employees (child) tables defined in our DataSet.
The DataRelation object supports two different functions.

It allows navigation between the related tables by making available the records that are related to a
record you're working with. If you're working with a parent record, the DataRelation provides the
child records. If you're working with a child record, it provides its parent record.
It can enforce referential integrity rules, such as cascading changes to related tables, when performing
operations on records in either table.
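The navigation function is exposed through the GetChildRows and GetParentRow methods of the DataRow. A sketch, assuming the relDepartmentEmployees relation created in Listing 5.6:

```vbnet
' From a parent (department) row, fetch the related employees
Dim dept As DataRow = dsEmployeeInfo.Tables("Departments").Rows(1)
Dim emp As DataRow
For Each emp In dept.GetChildRows("relDepartmentEmployees")
    Me.lstOutput.Items.Add(emp("LastName"))
Next

' From a child (employee) row, fetch its parent department
Dim firstEmp As DataRow = dsEmployeeInfo.Tables("Employees").Rows(0)
Dim parentDept As DataRow = _
    firstEmp.GetParentRow("relDepartmentEmployees")
```

GetChildRows returns an array of DataRow objects (empty if there are no children), and GetParentRow returns Nothing when no parent exists.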


Continue with the form frmDataSets that you prepared earlier.

1. Add a button immediately below the btnCreateDS button from the Windows Forms tab of the Toolbox.
2. In the Properties window, set the Name property of the button to btnCreateRelations and set the Text
property to Create Relations.
3. Add the code shown in Listing 5.6.
Listing 5.6 Code to create and display table relations

Private Sub btnCreateRelations_Click( _


ByVal sender As System.Object, _
ByVal e As System.EventArgs) _
Handles btnCreateRelations.Click
Dim rel As DataRelation
CreateDataSet()
' Create the relation between the Departments and
' Employees tables
rel = dsEmployeeInfo.Relations.Add( _
"relDepartmentEmployees", _
dsEmployeeInfo.Tables("Departments").Columns("ID"), _
dsEmployeeInfo.Tables("Employees").Columns("DepartmentID"))
DisplayRelations(dsEmployeeInfo)
End Sub
Private Sub DisplayRelations(ByVal ds As DataSet)
Dim rel As DataRelation
' Print the names of each column in each table through
' the Relations.
Me.lstOutput.Items.Add("")
Me.lstOutput.Items.Add("DISPLAY RELATIONS")
For Each rel In ds.Relations
' Display Relation Name
Me.lstOutput.Items.Add("NAME: " & rel.RelationName)
' Show Parent table & field
Me.lstOutput.Items.Add("PARENT: " & _
rel.ParentTable.ToString & "." & _
rel.ParentColumns(0).ColumnName)
' Show Child table & field
Me.lstOutput.Items.Add("CHILD: " & _
rel.ChildTable.ToString & "." & _
rel.ChildColumns(0).ColumnName)
Next
Me.lstOutput.Items.Add("")
End Sub

The first thing you need to do is to create the appropriate DataRelation object. Every DataSet has a
collection of Relations that is exposed as its Relations property. This property is of type
DataRelationCollection and supports several overloaded forms of the Add method. The form used in
Listing 5.6 takes three arguments: a name for the relation, a reference to a DataColumn in the parent
table, and a reference to a DataColumn in the child table. If the relation between the tables comprised more
than one column, a different form of the Add method could be called with arguments that were arrays of
DataColumn objects.
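A minimal sketch of that multi-column form of Add, using hypothetical Region and Code columns that are not part of the sample schema:

```vb
' Sketch only: relating two tables on a composite key by passing
' arrays of DataColumn objects to Relations.Add. The column names
' used here are assumptions for illustration.
Dim parentCols() As DataColumn = { _
    ds.Tables("Departments").Columns("Region"), _
    ds.Tables("Departments").Columns("Code")}
Dim childCols() As DataColumn = { _
    ds.Tables("Employees").Columns("DeptRegion"), _
    ds.Tables("Employees").Columns("DeptCode")}
ds.Relations.Add("relComposite", parentCols, childCols)
```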
The DisplayRelations procedure simply loops across the relations in the Relations property of the
DataSet that it receives as an argument. For each relation that exists, the name of the relation, the name of
the parent table and column, and the name of the child table and column are displayed.

Note
To make DisplayRelations more generic, you could add code to loop across all the columns in
the ParentColumns and ChildColumns array properties, rather than just displaying the first
element as you've done here.

When you run the DataSetCode project and click on the Create Relations button, the listbox should display
the specifics of the relation created between the Employees and Departments tables.
In addition to the Relations collection of the DataSet , which contains all the relations defined between
tables in that DataSet , each DataTable has two collections of relations (properties): ParentRelations
and ChildRelations, which contain the relations between the DataTable and related parent and child
tables, respectively.
Now that you can access the relation definitions between tables, you can also navigate the tables and
actually retrieve the related data. Begin by adding another button and code to the form frmDataSets we
prepared earlier.

1. Add a button immediately below the btnCreateRelations button from the Windows Forms tab of the
Toolbox.
2. In the Properties window, set the Name property of the button to btnChildRows and set the Text
property to Child Rows.
3. Add the code shown in Listing 5.7.
Listing 5.7 Code to display parent and child data from related tables

Private Sub btnChildRows_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnChildRows.Click
    Dim rel As DataRelation
    CreateDataSet()
    AddData()
    ' Create the relation between the Departments and
    ' Employees tables
    rel = dsEmployeeInfo.Relations.Add( _
        "relDepartmentEmployees", _
        dsEmployeeInfo.Tables("Departments").Columns("ID"), _
        dsEmployeeInfo.Tables("Employees").Columns("DepartmentID"))
    DisplayChildRows(dsEmployeeInfo.Tables("Departments"))
End Sub

Private Sub DisplayChildRows(ByVal dt As DataTable)
    Dim rel As DataRelation
    Dim relatedRows() As DataRow
    Dim row As DataRow
    Dim col As DataColumn
    Dim i As Integer
    Dim rowData As String
    Me.lstOutput.Items.Add("")
    Me.lstOutput.Items.Add("CHILD ROWS")
    For Each row In dt.Rows
        For Each rel In dt.ChildRelations
            Me.lstOutput.Items.Add( _
                dt.TableName & ":" & _
                rel.ParentColumns(0).ColumnName & _
                " = " & row(rel.ParentColumns(0).ToString))
            relatedRows = row.GetChildRows(rel)
            ' Print values of rows.
            For i = 0 To relatedRows.GetUpperBound(0)
                rowData = "****" & _
                    rel.ChildTable.TableName & ":"
                For Each col In rel.ChildTable.Columns
                    rowData = rowData & " " & _
                        relatedRows(i)(col.ToString)
                Next col
                Me.lstOutput.Items.Add(rowData)
            Next i
        Next rel
    Next row
End Sub

The button click handler, btnChildRows_Click, first creates the DataSet and DataTables by calling
CreateDataSet (shown previously in Listing 5.1) and then calls AddData (shown previously in Listing 5.2)
to fill the tables with data. It then creates the relation between the Employees and Departments tables,
using the line of code from the btnCreateRelations_Click procedure, shown previously in Listing 5.6.
Finally, DisplayChildRows is called, passing the Departments table as the parent table.

DisplayChildRows implements a triple-nested loop to display all the columns of data in each related table
(in this case only one) for each row in the parent table. For each row in the parent table passed in as an
argument, it goes through all the relations defined in the table's ChildRelations property and displays the
table's name, the column name in the parent table, and the value of that column in the current row. The
row's GetChildRows method is called with the current relation as an argument, and an array of DataRows
is returned with the appropriate child rows. For each of these rows, all the column values are displayed,
prefixed by asterisks and the child table name.

Note
Some versions of GetChildRows accept an additional argument defining which version of the
rows to return (as defined in the DataRowVersion enumeration shown in Table 5.2). Equivalent
methods exist for getting the parent row or rows of a given child row.
You may question why the preceding statement refers to parent rows (plural). How can a child
have more than a single parent? The answer is that, although a relation normally will define a
single parent for each child row (unique parent column values), it also allows for defining
nonunique parent columns and therefore a set of methods for retrieving multiple parent rows
(GetParentRows), rather than a single parent row (GetParentRow).
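The note above can be sketched in code; this assumes the relDepartmentEmployees relation from Listing 5.6 has been created and the tables contain data:

```vb
' Sketch: navigating from a child row back to its parent(s).
Dim empRow As DataRow = dsEmployeeInfo.Tables("Employees").Rows(0)
' The usual case: the parent columns are unique, so there is one parent.
Dim deptRow As DataRow = empRow.GetParentRow("relDepartmentEmployees")
' When the parent columns are not unique, several parents may match.
Dim deptRows() As DataRow = _
    empRow.GetParentRows("relDepartmentEmployees")
' A DataRowVersion can also be supplied, for example:
Dim origChildren() As DataRow = _
    dsEmployeeInfo.Tables("Departments").Rows(0). _
    GetChildRows("relDepartmentEmployees", DataRowVersion.Original)
```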

When you run the DataSetCode project and click on the Child Rows button, the child rows in the Employees
table for each of the parent rows in the Department tables are displayed in the listbox, as shown in Figure
5.3.
Figure 5.3. Results of displaying parent and related child rows of the Departments and Employees
tables

Table Constraints
Constraints are rules used to enforce certain restrictions on one or more columns of a table. The purpose of
these rules is to ensure the integrity of the data in the table. ADO.NET supports two types of constraints:
UniqueConstraint and ForeignKeyConstraint. A UniqueConstraint ensures that all values for the
specified column(s) are unique within the table. A ForeignKeyConstraint defines a primary key-foreign
key relationship between columns in two tables and allows the specification of actions to be performed when
parent (primary key) rows are added, deleted, or modified. An attempted violation of the constraint results
in a run-time error.
Note that constraints are enforced only when the EnforceConstraints property of the DataSet containing
the table is set to True. The default value of this property is True.
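For example, enforcement can be suspended temporarily while loading data in an arbitrary order (a sketch):

```vb
' Sketch: turn off constraint checking during a bulk load so that,
' say, child rows can be added before their parent rows exist.
dsEmployeeInfo.EnforceConstraints = False
' ... add or modify rows in any order here ...
' Re-enabling throws a ConstraintException if any rule is violated.
dsEmployeeInfo.EnforceConstraints = True
```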
Although constraints can be created directly, they will most often be created indirectly. In fact, you have
already created several constraints in the previous code examples. A UniqueConstraint object is
automatically created and added to the Constraints collection of a DataTable whenever you set the
Unique property of a DataColumn to True and whenever you create a primary key for a DataTable. In
addition, both a UniqueConstraint and a ForeignKeyConstraint are automatically created whenever
you create a DataRelation between two tables. The UniqueConstraint is created on the related
column(s) in the parent table and the ForeignKeyConstraint is created on the related column(s) in the
child table.

Note
You can create a DataRelation that relates two tables without actually creating the two
constraints just mentioned. The usefulness of this approach is questionable, but it is available.
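A sketch of that approach, passing False for the createConstraints argument of the DataRelation constructor:

```vb
' Sketch: a relation used for navigation only; no UniqueConstraint
' or ForeignKeyConstraint is added to the tables.
Dim relNav As New DataRelation("relNavigationOnly", _
    dsEmployeeInfo.Tables("Departments").Columns("ID"), _
    dsEmployeeInfo.Tables("Employees").Columns("DepartmentID"), _
    False)
dsEmployeeInfo.Relations.Add(relNav)
```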

Let's add some code to the form frmDataSets to display the constraints of the tables in a DataSet .

1. Add a button immediately below the btnChildRows button from the Windows Forms tab of the Toolbox.
2. In the Properties window, set the Name property of the button to btnConstraints and set the Text
property to Constraints.
3. Add the code shown in Listing 5.8.
Listing 5.8 Code to display both unique and foreign key constraints

Private Sub btnConstraints_Click(ByVal sender As _
        System.Object, ByVal e As System.EventArgs) _
        Handles btnConstraints.Click
    Dim dt As DataTable
    Dim rel As DataRelation
    CreateDataSet()
    ' Create the relation between the Departments and
    ' Employees tables
    rel = dsEmployeeInfo.Relations.Add( _
        "relDepartmentEmployees", _
        dsEmployeeInfo.Tables("Departments").Columns("ID"), _
        dsEmployeeInfo.Tables("Employees").Columns("DepartmentID"))
    For Each dt In dsEmployeeInfo.Tables
        DisplayConstraints(dt)
    Next dt
End Sub

Private Sub DisplayConstraints(ByVal dt As DataTable)
    Dim i As Integer
    Dim cs As Constraint
    Dim uCS As UniqueConstraint
    Dim fkCS As ForeignKeyConstraint
    Dim columns() As DataColumn
    Me.lstOutput.Items.Add("")
    Me.lstOutput.Items.Add( _
        "CONSTRAINTS FOR TABLE: " & dt.TableName)
    Me.lstOutput.Items.Add( _
        "====================================")
    For Each cs In dt.Constraints
        Me.lstOutput.Items.Add( _
            "Constraint Name: " & cs.ConstraintName)
        Me.lstOutput.Items.Add( _
            "Type: " & cs.GetType().ToString())
        If TypeOf cs Is UniqueConstraint Then
            uCS = CType(cs, UniqueConstraint)
            ' Get the Columns as an array.
            columns = uCS.Columns
            ' Print each column's name.
            For i = 0 To columns.Length - 1
                Me.lstOutput.Items.Add( _
                    "Column Name: " & _
                    columns(i).ColumnName)
            Next i
        ElseIf TypeOf cs Is ForeignKeyConstraint Then
            fkCS = CType(cs, ForeignKeyConstraint)
            ' Get the child Columns and display them.
            columns = fkCS.Columns
            For i = 0 To columns.Length - 1
                Me.lstOutput.Items.Add( _
                    "Column Name: " & _
                    columns(i).ColumnName)
            Next i
            ' Display the related (parent) table name.
            Me.lstOutput.Items.Add( _
                "Related Table Name: " & _
                fkCS.RelatedTable.TableName)
            ' Get the related (parent) columns and
            ' display them.
            columns = fkCS.RelatedColumns
            For i = 0 To columns.Length - 1
                Me.lstOutput.Items.Add( _
                    "Related Column Name: " & _
                    columns(i).ColumnName)
            Next i
        End If
        Me.lstOutput.Items.Add("")
    Next cs
End Sub

The purpose of the btnConstraints_Click procedure is to respond to the button click; set up the
DataSet , DataTables, and DataRelation (using code written in previous listings); and then call
DisplayConstraints, which does all the interesting work.

DisplayConstraints is a generic routine that accepts a DataTable as a parameter and displays
information about the constraints defined for that table. It loops across all the constraints in the
Constraints property of the passed table. For each constraint, you need to test whether it is a
UniqueConstraint or a ForeignKeyConstraint. Both of these classes are derived from the abstract
Constraint class, so they can coexist within the same typed collection. However, each has a different set of
properties, so you need to identify the Constraint and then convert the object to the appropriate specific
type. For a UniqueConstraint, just display the names of all the (one or more) columns defined in the
constraint. For a ForeignKeyConstraint, also display the name of the related (parent) table along with
the names of related columns in that table.
The results of running the DataSetCode project and clicking on the Constraints button are shown in Figure
5.4. Remember that, although three constraints are shown (one on the Employees table and two on the
Departments table), none were explicitly created. They were automatically created when you set the
DataColumn Unique property to True and when you created the relation between the tables.
Figure 5.4. Results of displaying the constraints of the Departments and Employees tables

The ForeignKeyConstraint object has three Rule properties that govern the actions taken during the
editing of related tables. The UpdateRule and DeleteRule properties define the action to be taken when a
row in a parent table is either updated or deleted. The options for these rules are defined in the Rule
enumeration, shown in Table 5.4.

Table 5.4. The Rule Enumeration

Cascade: The deletion or update made to the parent row is also made to the related child row(s). It is the default value.

None: The deletion or update made to the parent row isn't made to the related child row(s). This condition could create child rows having references to invalid parent rows.

SetDefault: The deletion or update made to the parent row isn't made to the related child row(s). Instead, the related column (foreign key) in the related child rows is set to the default value defined for that column.

SetNull: The deletion or update made to the parent row isn't made to the related child row(s). Instead, the related column is set to DBNull. This condition could create orphaned child rows that have no relationship to parent rows.

The third Rule property is the AcceptRejectRule. This rule, whose value can be either Cascade or None,
defines whether invoking the AcceptChanges or RejectChanges method on a parent row causes
AcceptChanges or RejectChanges to be invoked automatically on the related child rows. The default is
Cascade, which means that if AcceptChanges or RejectChanges is called on a parent row, the
corresponding method will be automatically called on the related child rows. If the value is set to None,
calling one of these two methods on a parent row doesn't affect the editing of the related child rows.
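These rules can be adjusted on the automatically created ForeignKeyConstraint; as a sketch, the constraint can be reached through the relation's ChildKeyConstraint property:

```vb
' Sketch: customizing the rules of the ForeignKeyConstraint that
' was created along with the relDepartmentEmployees relation.
Dim fkCS As ForeignKeyConstraint = _
    dsEmployeeInfo.Relations("relDepartmentEmployees"). _
    ChildKeyConstraint
fkCS.UpdateRule = Rule.Cascade          ' propagate key changes
fkCS.DeleteRule = Rule.SetNull          ' set child keys to DBNull
fkCS.AcceptRejectRule = AcceptRejectRule.None
```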

[ Team LiB ]

[ Team LiB ]

Using the DataSet Component


The Data tab of the Visual Studio toolbox contains a DataSet component that allows you to set property
values by means of the Properties window, rather than doing everything in code. This approach is analogous
to that in Chapter 4 for the Connection and Command objects. Let's run through the steps to configure a
DataSet and its associated objects with the same definitions used in the code samples in the previous
sections.
First, we define the DataSet and the table schemas.

1. Add another form, frmDataSetComponent, to the DataSetCode project.
2. In the Properties window for frmDataSetComponent, set its Text property to DataSet component.
3. Enlarge the size of frmDataSetComponent.
4. From the Windows Forms tab of the Toolbox, add a listbox to the form and place it on the right side of
the form.
5. In the Properties window, set the Name property of the listbox to lstOutput.
6. Enlarge the listbox so that it covers about 80 percent of the area of the form.
7. From the Data tab of the toolbox, drag a DataSet component onto the design surface of
frmDataSetComponent. In the dialog box that appears, select the Untyped dataset radio button and
click on the OK button. This component isn't visible at run-time, so it will appear in the component tray
beneath the form's design surface.
8. In the Properties window for this new DataSet component, named DataSet1, set the Name property to
dsEmployeeInfo.
9. Select the Tables property in the Properties window and then click on the ellipsis ("...") button to
display the Tables Collection Editor.
10. Click on the Add button to display the properties for the first table in the DataSet.
11. In the Table1 Properties panel set the TableName property to Employees. The results so far are shown
in Figure 5.5.
Figure 5.5. Tables Collection Editor after adding the Employees table

12. Select the Columns property in the Employees Properties panel and then click on the ellipsis ("...")
button to display the Columns Collection Editor.
13. Click on the Add button to display the properties for the first column in the Employees table.
14. In the Column1 Properties panel set the ColumnName property to FirstName.
15. Click on the Add button to display the properties for the second column in the Employees table.
16. In the Column1 Properties panel set the ColumnName property to LastName.
17. Click on the Add button to display the properties for the third column in the Employees table.
18. In the Column1 Properties panel set the ColumnName property to DepartmentID and also set the
DataType property to System.Int32. The Columns Collection Editor for the Employees table is shown
in Figure 5.6.
Figure 5.6. Column Collection Editor for the Employees table

19. Click on the Close button in the Columns Collection Editor to return to the Tables Collection Editor for
the dsEmployeesInfo DataSet to add the Departments table.
20. Click on the Add button to display the properties for the second table in the DataSet .
21. In the Table1 Properties panel set the TableName property to Departments.
22. Select the MinimumCapacity property in the Departments Properties panel and set it to 5.
23. Select the Columns property in the Departments Properties panel and then click on the ellipsis button
to display the Columns Collection Editor.
24. Click on the Add button to display the properties for the first column in the Departments table.
25. In the Column1 Properties panel set the ColumnName property to ID and also set the DataType
property to System.Int32.
26. In the ID Properties panel set the ReadOnly property to True, the Unique property to True, and the
AutoIncrement property to True.
27. Click on the Add button to display the properties for the second column in the Departments table.
28. In the Column1 Properties panel set the ColumnName property to DepartmentName.
29. In the DepartmentName Properties panel set the Unique property to True and the AllowDBNull
property to False.
30. Click on the Close button on the Columns Collection Editor to return to the Tables Collection Editor and
then click on its Close button to close the Tables Collection Editor.
You have now implemented the dsEmployeesInfo DataSet and its Employees and Departments tables by
setting properties of design-time components to the same values that you set in the (run-time) code shown
in Listing 5.1.
Now continue setting the design-time components and defining the relations between the tables in the
DataSet .

1. In the Properties window for the dsEmployeesInfo DataSet component, select the Relations property
and then click on the ellipsis button to display the Relations Collection Editor.
2. Click on the Add button to display the properties for the first relation in the DataSet .
3. Set the Name property to relDepartmentEmployees.
4. Set the ParentTable property to Departments by selecting Departments from the first drop-down
listbox.
5. Set the ChildTable property to Employees by selecting Employees from the second drop-down
listbox.
6. In the Columns section, set the first entry in the Key Columns column to ID by selecting ID from the
drop-down listbox. Doing so sets the value of the DataRelation's ParentColumns property.
7. In the Columns section, set the first entry in the Foreign Key Columns column to DepartmentID by
selecting DepartmentID from the drop-down listbox. Doing so sets the value of the DataRelation's
ChildColumns property.
8. Accept the default values for the Update, Delete, and AcceptReject rule properties by not changing
them.
9. Click on the OK button to close the Relations Collection Editor.
The only setting missing now is the PrimaryKey property for each table. To set it, do the following.

1. Select the Tables property in the Properties window of the dsEmployeeInfo component and then click
on the ellipsis button to display the Tables Collection Editor.
2. In the Members pane, select the Employees table.
3. In the Employees Properties pane, select the PrimaryKey property and then click on the arrow
button for the drop-down listbox.
4. Select the column or columns that comprise the primary key, from the list of available columns
displayed. If the primary key comprises multiple columns, be sure to select them in the desired order.
In this case, select the FirstName column and then the LastName column, as shown in Figure 5.7.
Figure 5.7. Selecting multiple columns to define the value of PrimaryKey property

5. Press the Enter key to accept the settings for the PrimaryKey Property.
6. In the Members pane, select the Departments table.
7. In the Departments Properties pane, select the PrimaryKey property and then click on the arrow
button for the drop-down listbox.
8. Select the DepartmentName column from the list of available columns displayed and then press the
Enter key to accept the settings for the PrimaryKey Property.
9. Click on the Close button to close the Tables Collection Editor.
To show that you get the same results by using the design-time components as you did previously by using
pure code, copy and paste some routines from frmDataSets into frmDataSetComponent and then execute
frmDataSetComponent.

1. Select and copy the AddData routine (including all the code of the routine) from frmDataSets and
paste it into frmDataSetComponent.
2. Repeat step 1 for the DisplayDataSet and DisplayChildRows routines.
3. Add the following code for the frmDataSetComponent_Load event handler in frmDataSetComponent:

Private Sub frmDataSetComponent_Load(ByVal sender As _


System.Object, ByVal e As System.EventArgs) _
Handles MyBase.Load
AddData()
DisplayDataSet()
DisplayChildRows(dsEmployeeInfo.Tables("Departments"))
End Sub

4. Right-click on the DataSetCode project in the Solution Explorer and select Properties from the pop-up
menu displayed.
5. Select the General item in the Common Properties folder and then set the Startup object property to
frmDataSetComponent.
If you now run the DataSetCode project, the listbox on frmDataSetComponent will display all of the data
from the DataSet as well as the related child rows of the Departments table, as shown in Figure 5.8.
Figure 5.8. Using the DataSet design-time component to display DataSet data and related child
rows

Note
You may find it interesting to review the code generated by the design-time components; it will
be very similar to what you coded manually in the earlier sections of this chapter. You can see the
generated code by opening the code view for the form in the Visual Studio code editor and then
expanding the region named Windows Form Designer generated code.

[ Team LiB ]

[ Team LiB ]

Summary
We covered a lot of material in this chapter. That was necessary because the objects and concepts discussed
here (together with those in Chapter 4) form the basis of all the database applications that we will develop
with ADO.NET and Visual Basic .NET. We have shown how the DataSet and its associated objects, such as
the DataTable, DataRelation, DataRow, and DataColumn, provide great flexibility and a rich
programming model for handling data while disconnected from the physical data source. In Chapter 6, we
demonstrate how the DataAdapter can be used to fill the DataSet with data and to provide automatic
updating to the data source of the changes made to the DataSet.

Questions and Answers

Q1:

I understand that I can still access my data source data either directly (using data
commands) or indirectly (using disconnected DataSets). Are there any guidelines as to which
approach is better for different scenarios?

A1:

Using DataSets has several advantages over direct database access. It provides a simple and
uniform way to move data between the different tiers and locations of a distributed database
application, as well as between different applications, owing to its inherent XML support. It
provides a mechanism for data caching and allows you to sort, filter, and search these data
without having to access the data source for each operation. Finally, it allows you to fetch
multiple tables, possibly from different data sources, and to manipulate them either individually
or based on the relationships between them.
Directly accessing the data source by using the Command object has its own advantages. Some
operations, such as those that modify database structure, can be performed only by direct
access. Even for standard SQL statements or stored procedure calls, direct commands provide
more control over the timing or method of execution of the commands, which may facilitate
greater performance or scalability. Finally, the overhead of the memory requirements of the
DataSet can be reduced, especially when there is no application-driven need to cache the data,
such as when you're building a Web page or populating a listbox.
So when should you use direct database access rather than DataSets? Clearly, if you are
performing an operation that can be done only through a Command object, that is the way to go.
This situation includes calling stored procedures that perform manipulations and return only a
return value and/or parameter values, as well as database structure or DDL operations. You also
should avoid using DataSets (1) if the data is read-only; (2) if your use of it is to be short-lived
and loading and retaining the DataSet in memory doesn't pay; or (3) if the data is to be used
on the server and there is no need to pass the data to a different tier or computer. In most other
cases, it is usually best to use and take advantage of the flexibility of ADO.NET's DataSet
object.

[ Team LiB ]

[ Team LiB ]

Chapter 6. ADO.NET: The DataAdapter


IN THIS CHAPTER

- Populating a DataSet from a Data Source
- Updating the Data Source
Even with everything we presented in Chapter 5 (including the DataSet, DataTable, DataRow,
DataColumn, DataRelation, and Constraint objects), something is still missing. You might well be
tempted to stand up and scream (with flashbacks to Jerry Maguire): "Show me the database!" That's right,
we still haven't demonstrated how to load the DataSet with data from a database or other data source and
subsequently update the data source with the changes made to the data in the DataSet . The final piece of
the puzzle that allows you to do so is the DataAdapter object.
The DataAdapter is the intermediary, or bridge, between the DataSet and the actual data source. Thus it
is also the bridge between the two ADO.NET worlds: the connected world of the .NET Data Provider objects
discussed in Chapter 4 (Connection, Command, and DataReader) and the disconnected world of the
DataSet and its associated objects analyzed in Chapter 5.
Again, the DataAdapter is the final core object that is a required element of a .NET Data Provider. Each
.NET Data Provider must supply its own implementation of the DataAdapter (SqlDataAdapter,
OleDbDataAdapter, OdbcDataAdapter, and so on). The reason that we delayed presenting it until now is
that, without an understanding of the DataSet and its associated objects, you can't do anything useful with
the DataAdapter. We now show how the DataAdapter brings together and enhances everything that
we've presented so far.
Figure 6.1 illustrates the use of the DataAdapter. Its fundamental task is to manage the transfer of data
from the data source to the DataSet and from the DataSet to the data source. This task is accomplished
with a small set of essential properties and methods.

Figure 6.1. The DataAdapter is the bridge between the data source and the DataSet tables

The DataAdapter features two main methods. The Fill method fills a DataTable with data retrieved from
a data source, and the Update method updates a data source with changes made to the data in the
DataSet tables.
The DataAdapter contains a set of four command properties (SelectCommand, InsertCommand,
UpdateCommand, and DeleteCommand) that are ADO.NET Command objects configured for each of the
respective operations. The SelectCommand object is executed when the DataAdapter's Fill method is
called. When the DataAdapter's Update method is called, the appropriate one of the other three
command objects is executed for each modified DataRow . The database developer has complete control over
these commands, which allows for customization of the commands used for the Insert, Update, and Delete
operations.
Finally, a collection of table and column mappings, DataTableMappings , allows the definition of mappings
between data source and DataTable table and column names.
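A brief sketch of such a mapping (it requires Imports System.Data.Common; the Name column alias here is hypothetical, not part of the sample schema):

```vb
' Sketch: mapping the default source table "Table" to a DataTable
' named Departments, and renaming one column along the way.
Dim da As New SqlDataAdapter("select * from tblDepartment", _
    "server=localhost;uid=sa;database=novelty")
Dim map As DataTableMapping = _
    da.TableMappings.Add("Table", "Departments")
map.ColumnMappings.Add("DepartmentName", "Name")
Dim ds As New DataSet()
' Fill with no table name: the data lands in "Departments", and the
' DepartmentName column appears in the DataTable as "Name".
da.Fill(ds)
```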

[ Team LiB ]

[ Team LiB ]

Populating a DataSet from a Data Source


The minimum that you need in order to make use of the DataAdapter is a connection and a Select
command. Although independent Connection and Command objects can be created, configured, and
assigned to the DataAdapter's Connection and SelectCommand properties, using the form of the
DataAdapter's constructor that accepts two string parameters (one for the Select statement and one for
the connection string) is often more convenient. The code for this task is

Dim da As SqlDataAdapter = New SqlDataAdapter( _
    "select * from tblDepartment", _
    "server=localhost;uid=sa;database=novelty")

Note
Don't forget, the SQL statement that you specify for the SelectCommand can contain parameters.
If necessary, refer back to Chapter 4 to refresh your memory about how to define parameters for
the different .NET Data Providers.
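As a sketch, a parameterized SelectCommand with the SqlClient provider might look like this (the @DeptID parameter is illustrative, not from the sample code):

```vb
' Sketch: a parameterized Select statement on the DataAdapter.
Dim da As New SqlDataAdapter( _
    "select * from tblEmployee where DepartmentID = @DeptID", _
    "server=localhost;uid=sa;database=novelty")
da.SelectCommand.Parameters.Add( _
    "@DeptID", SqlDbType.Int).Value = 1
Dim ds As New DataSet()
da.Fill(ds, "Employees")   ' fills only the matching rows
```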

Let's now call the Fill method to retrieve data from the Novelty database and load them into the DataSet .
First, we add the code to the frmDataSets form in the DataSetCode project begun in Chapter 5 by doing the
following.

1. Right-click on the DataSetCode project in the Solution Explorer and select Properties from the pop-up
menu displayed.
2. Select the General item in the Common Properties folder and then set the Startup object property to
frmDataSets.
3. Display frmDataSets in the Form Designer.
4. Add a button below the btnConstraints button from the Windows Forms tab of the Toolbox.
5. In the Properties window, set the Name property of the button to btnDataAdapterFill and set the Text
property to DataAdapter Fill.
6. Because we'll be using the SqlClient data provider, we need to add an Imports statement for this
namespace (shown in boldface type) after the existing Imports statements at the top:

Imports System
Imports System.Data
Imports System.Data.SqlClient

7. Add the code shown in Listing 6.1 to frmDataSets.
Listing 6.1 Using a SqlDataAdapter to fill the dsEmployeeInfo DataSet

Private Sub btnDataAdapterFill_Click(ByVal sender As _
        System.Object, ByVal e As System.EventArgs) _
        Handles btnDataAdapterFill.Click
    ReadData()
End Sub

Private Sub ReadData()
    Dim rows As Integer
    Dim daDepartments As SqlDataAdapter = New _
        SqlDataAdapter("select * from tblDepartment", _
        "server=localhost;uid=sa;database=novelty")
    dsEmployeeInfo = New DataSet()
    rows = daDepartments.Fill(dsEmployeeInfo, "Departments")
    DisplayDataSet(dsEmployeeInfo)
End Sub

After creating the daDepartments DataAdapter with the Select statement and the connection string, we
can call the Fill method to fill a table in the dsEmployeeInfo DataSet named Departments. The Fill
method also returns the number of rows added to (or refreshed in) the DataSet . The following steps are
implicitly performed by the DataAdapter in order to execute the Fill method:

- Opens the SelectCommand's connection, if it is not already open.
- Executes the command specified by the SelectCommand's CommandText property (and parameters, if any).
- Creates a DataReader to return the column names and types used to create a new DataTable in the specified DataSet, if it doesn't already exist.
- Uses the DataReader to retrieve the data and populate the table.
- Closes the DataReader.
- Closes the connection, if it was opened by the DataAdapter. If it was originally found open, the DataAdapter will leave it open.

Note
When executing a single command against a data source, you will usually find it simpler and more
efficient to let the DataAdapter internally create and manage the Command and Connection
objects by supplying the Select and connections strings when creating the DataAdapter.
However, if you're going to execute several commands against the same database, it is more

efficient to create and open a Connection object and then assign it to the DataAdapter. That
keeps the connection open rather than its repeatedly being opened and closed, which is a
significant performance hit. The equivalent code would then be

Private Sub ReadData()
    Dim rows As Integer
    Dim daDepartments As New SqlDataAdapter()
    Dim conn As New SqlConnection( _
        "server=localhost;uid=sa;database=novelty")
    Dim cmdSelect As New SqlCommand( _
        "select * from tblDepartment")
    dsEmployeeInfo = New DataSet()
    cmdSelect.Connection = conn
    daDepartments.SelectCommand = cmdSelect
    ' Open the connection before starting operations
    conn.Open()
    rows = daDepartments.Fill(dsEmployeeInfo, "Departments")
    ' Do other database operations here
    ' . . .
    DisplayDataSet(dsEmployeeInfo)
    ' When we are all done, close the connection
    conn.Close()
End Sub

Of course, to make it really worthwhile, you would need additional database operations using the
same connection string.

We passed the Fill method a reference to a DataSet and the name of the DataTable to fill. We could also
have passed a reference to a DataTable instead of the name. Another option is to specify only the DataSet
and then Fill defaults to loading the data into a DataTable named Table.

Note
Although you would normally use the DataAdapter to fill a DataTable contained in a DataSet,
there is an overloaded version of the Fill method that loads data into a stand-alone DataTable
object.
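As a brief sketch of that overload, the following fills a stand-alone DataTable directly, with no containing DataSet (connection string as elsewhere in this chapter):

```vb
Dim daDepartments As New SqlDataAdapter( _
    "select * from tblDepartment", _
    "server=localhost;uid=sa;database=novelty")
' Fill a stand-alone DataTable rather than a table inside a DataSet
Dim dtDepartments As New DataTable("Departments")
daDepartments.Fill(dtDepartments)
```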

If we want to load a second table, we can add a second DataAdapter with a different Select statement. To
load both the Department and Employees tables from their corresponding database tables, we replace the
routine ReadData in Listing 6.1 with:

Private Sub ReadData()
    Dim rows As Integer
    Dim daDepartments As SqlDataAdapter = New _
        SqlDataAdapter("select * from tblDepartment", _
        "server=localhost;uid=sa;database=novelty")
    Dim daEmployees As SqlDataAdapter = New _
        SqlDataAdapter("select * from tblEmployee", _
        "server=localhost;uid=sa;database=novelty")
    dsEmployeeInfo = New DataSet()
    rows = daDepartments.Fill(dsEmployeeInfo, "Departments")
    rows = daEmployees.Fill(dsEmployeeInfo, "Employees")
    DisplayDataSet(dsEmployeeInfo)
End Sub

Running the DataSetCode project and clicking on the DataAdapterFill button would fill the listbox with results
similar to those obtained before. Only now, the data loaded into the DataSet and displayed in the listbox
comes from the SQL Server database, rather than being generated locally in code.
We could, of course, also create a DataRelation between the two tables, as we did in the previous chapter, to establish a parent-child relationship between the rows in the two tables.
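For example, assuming the ID column of the Departments table and the DepartmentID column of the Employees table, as in the previous chapter, the relation could be created like this:

```vb
' Parent-child relation: each employee row points to its department row
dsEmployeeInfo.Relations.Add("DepartmentEmployees", _
    dsEmployeeInfo.Tables("Departments").Columns("ID"), _
    dsEmployeeInfo.Tables("Employees").Columns("DepartmentID"))
```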
Using a different DataAdapter for each table in the DataSet isn't always necessary. We can reuse the same DataAdapter by modifying the commands that it uses. This approach is useful mainly for multiple fills; programmatically creating and modifying all the update commands (Insert, Update, and Delete) for each of the DataTables is more involved.

Note
It is also possible, and sometimes preferable, to fill a DataSet table with the result of a SQL Join that links two tables. Then there is a single DataSet table and no need to create a relation between tables. However, using independent tables linked by a DataRelation is usually more flexible. That's particularly true when it comes to updating the data source, as there often are limitations on updating joined tables. If we update the tables independently, these limitations don't exist.

Thus we can rewrite the preceding code, using a single DataAdapter, as follows:

Private Sub ReadData()


Dim rows as Integer
Dim da As SqlDataAdapter = New _
SqlDataAdapter("select * from tblEmployee", _
"server=localhost;uid=sa;database=novelty")

dsEmployeeInfo = New DataSet()


rows = da.Fill(dsEmployeeInfo, "Employees")
' Change Select statement to different table
da.SelectCommand.CommandText = _
"select * from tblDepartment"
rows = da.Fill(dsEmployeeInfo, "Departments")
DisplayDataSet(dsEmployeeInfo)
End Sub

Note
A more efficient way of loading two tables in the DataSet would be to supply a SelectCommand
that either calls a stored procedure that returns multiple result sets or that executes a batch of
SQL commands. Doing so requires only a single round-trip to the server to retrieve all of the data,
as opposed to the multiple trips required by the code shown. However, although the retrieving of
multiple tables in this manner is straightforward, updating the data source with changes made to
the DataSet tables would be somewhat complicated, if there are relations between the tables.
We look at how to update such tables in Business Case 6.1 later in this chapter.

Listing 6.2 demonstrates how to reuse a single DataAdapter for multiple operations and how to merge
multiple Fills into a single DataTable.
Listing 6.2 Using a single SqlDataAdapter to perform multiple Fill operations to a single table

Private Sub ReadData()


Dim daEmployees As SqlDataAdapter = New SqlDataAdapter( _
"select * from tblEmployee where DepartmentID = 1", _
"server=localhost;uid=sa;database=novelty")
dsEmployeeInfo = New DataSet()
daEmployees.Fill(dsEmployeeInfo, "Employees")
'Change WHERE clause from DepartmentID = 1 to
' DepartmentID = 3
daEmployees.SelectCommand.CommandText = _
"select * from tblEmployee where DepartmentID = 3"
daEmployees.Fill(dsEmployeeInfo, "Employees")
DisplayDataSet(dsEmployeeInfo)
End Sub

Note that in Listing 6.2 the value returned by the Fill method is no longer captured in the local variable rows. It isn't necessary to capture this return value unless we intend to use or test it. We could take the approach of Listing 6.2 one step further and execute the same Select statement multiple times to refresh the same DataTable with the most recent data (possibly modified by a different user) as it currently exists at the data source.

Note
Existing values in a DataTable are updated on a subsequent Fill operation only if a primary key is defined for the DataTable. The default operation of the Fill method is to fill the DataTable with column schema information and rows of data, without setting any constraints that might be configured at the data source. To set the PrimaryKey property correctly, so that such refreshes (as well as the Find method) can be executed, one of the following must be done before the Fill method is called.

Call the FillSchema method of the DataAdapter.
Set the DataAdapter's MissingSchemaAction property to MissingSchemaAction.AddWithKey.
Explicitly set the PrimaryKey property to the appropriate column(s) if they are known at design time.
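A minimal sketch of the second option, assuming the daEmployees adapter and dsEmployeeInfo DataSet from the preceding listings:

```vb
' Have Fill bring over primary-key information along with the schema,
' so later Fills refresh existing rows instead of appending duplicates
daEmployees.MissingSchemaAction = MissingSchemaAction.AddWithKey
daEmployees.Fill(dsEmployeeInfo, "Employees")
' ...later: re-execute the same Select to pick up other users' changes
daEmployees.Fill(dsEmployeeInfo, "Employees")
```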

Updating the Data Source


Normally, after making all the desired changes to the tables in the DataSet , you will want to save those
changes back to the data source. To do so, call the Update method of the DataAdapter. When the Update
method is called, the DataAdapter analyzes the changes made to the specified table in the DataSet (or all
the tables if no table name is specified). For each row that has been changed, the appropriate command
(Insert, Update, or Delete) is executed against the data source to update it to the current data values. These
commands are specified by the InsertCommand, UpdateCommand, and DeleteCommand properties.

Note
The ability to easily specify custom SQL statements or stored procedures to be automatically used when a data source is being updated from a DataSet is a major improvement over what was available with ADO 2.X. With ADO.NET, not only can you modify how updates are performed when updating a batch of changed rows, but the ability to use stored procedures for this task offers improved performance and customized (business) logic, in addition to the ability to specify the SQL statements for the update operations. We present an example of this approach shortly. Moreover, the batch update mechanism works even with non-SQL data sources, unlike ADO 2.X, where batch update worked only with a SQL-based data source.

Each changed row is updated individually and not as part of a transaction or batch operation. In addition, the
order in which the rows are processed is determined by their order in the DataTable.
To control explicitly the order of the operations for a specific table, we can use either the GetChanges
method or the Select method. Both methods are available at either the DataSet or the DataTable level.
We use these methods to return separate sets of rows, with each set matching a different row state.
Let's say that we want to use the daDepartments DataAdapter to update the Novelty database from the dsEmployeeInfo DataSet. Based on our requirements, we first need to do all the inserts, then all the updates, and only then all the deletes. We could do so by calling the GetChanges method three times, specifying a different row state each time. After each call to GetChanges, we call the DataAdapter's Update method, passing it the DataTable returned by the GetChanges method.

dt = dsEmployeeInfo.Tables("Departments")
' Get each type of change and update accordingly
dtChanged = dt.GetChanges(DataRowState.Added)
daDepartments.Update(dtChanged)
dtChanged = dt.GetChanges(DataRowState.Modified)
daDepartments.Update(dtChanged)
dtChanged = dt.GetChanges(DataRowState.Deleted)
daDepartments.Update(dtChanged)

We can write this code more compactly as:

dt = dsEmployeeInfo.Tables("Departments")
' Get each type of change and update accordingly
daDepartments.Update(dt.GetChanges(DataRowState.Added))
daDepartments.Update(dt.GetChanges(DataRowState.Modified))
daDepartments.Update(dt.GetChanges(DataRowState.Deleted))

We could also achieve the same results by using the Select method:

dt = dsEmployeeInfo.Tables("Departments")
' Get each type of change and update accordingly
daDepartments.Update(dt.Select(Nothing, Nothing, _
    DataViewRowState.Added))
daDepartments.Update(dt.Select(Nothing, Nothing, _
    DataViewRowState.ModifiedCurrent))
daDepartments.Update(dt.Select(Nothing, Nothing, _
    DataViewRowState.Deleted))

The advantage of using the Select method rather than the GetChanges method is that it allows for
additional filtering and sorting (if desired).
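For instance, the following sketch (assuming a DepartmentName column in the Departments table) updates only the modified rows whose name starts with S, in alphabetical order:

```vb
' Select takes a filter expression, a sort order, and a row state
daDepartments.Update(dt.Select("DepartmentName LIKE 'S%'", _
    "DepartmentName ASC", DataViewRowState.ModifiedCurrent))
```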
This is a good time to remind you of the difference between removing a row and deleting a row from a
DataTable, as discussed in Chapter 5. When you Remove a row, it is actually removed from the collection
and no longer exists. When you Delete a row, it isn't actually removed, but is marked for deletion.
Therefore, when using a DataTable together with a DataAdapter for data source updates, you should
always use the Delete method rather than the Remove method to remove a row. When the DataAdapter
encounters a row that has been marked as deleted, it knows to execute the DeleteCommand against the
database to synchronize it with the DataTable. However, if the Remove method is used, the DataAdapter
will never see the removed row when Update is called, and the row won't be deleted from the data source.
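The difference can be sketched as follows, using the Departments table and daDepartments adapter from the earlier examples:

```vb
Dim dt As DataTable = dsEmployeeInfo.Tables("Departments")
dt.Rows(0).Delete()           ' row stays in the collection, marked Deleted
dt.Rows.Remove(dt.Rows(1))    ' row is gone; Update will never see it
' Only the first row's deletion reaches the database
daDepartments.Update(dt)
```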

Setting the Update Commands


The DataAdapter doesn't automatically create the Insert, Update, and Delete SQL statements needed to update the data source with the changes made to the data in the DataSet. If the Insert, Update, or Delete command hasn't been set when the Update method needs to call it, an exception is thrown. You can specify these commands in several ways.

Use the CommandBuilder object to automatically generate commands at runtime.
Explicitly program the commands in code.
Use the DataAdapter Design-Time Component and the DataAdapter Configuration Wizard.

Using the CommandBuilder Object


This approach is the simplest, but it's somewhat limited. It's similar to the BatchUpdate of ADO 2.X. By
linking a CommandBuilder object to a specific DataAdapter object, the CommandBuilder will
automatically generate the InsertCommand, UpdateCommand, and DeleteCommand properties needed by
that DataAdapter. If any of these properties are not null references (Nothing in Visual Basic), a command
object already exists for that property. The CommandBuilder won't override an existing command.

Note
As you might expect, a DataAdapter object must be specific to the .NET Data Provider that it is working with. Therefore, in our code we must use a specific derived object, such as SqlDataAdapter, OleDbDataAdapter, or OdbcDataAdapter.

For automatic command generation to work, the DataAdapter's SelectCommand must be set. The
CommandBuilder uses the table schema obtained by the SelectCommand's Select statement to generate
the corresponding Insert, Update, and Delete commands. Note that the columns returned by the
SelectCommand must include at least one primary key or unique column.
Modifying the CommandText of the Select statement after the update commands have been automatically
generated could cause exceptions to occur when one of the update commands is actually executed. If the
original Select statement that the command generation was based on contained columns that don't exist in
the modified statement, execution of one of the update commands by the DataAdapter's Update method
may try to access these nonexisting columns and cause an exception to be thrown. To avoid this problem
you should call the RefreshSchema method of the CommandBuilder after modifying the SelectCommand
property of the DataAdapter or after modifying the CommandText of that command object.

Note
Even after the CommandBuilder has generated Insert, Update, and/or Delete commands, the
corresponding properties of the DataAdapter are not modified. The CommandBuilder maintains
the generated command objects internally. You can obtain references to these objects by calling
the CommandBuilder's GetInsertCommand, GetUpdateCommand, or GetDeleteCommand
methods.
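For example, assuming a SqlCommandBuilder named cbEmployees that is linked to a configured DataAdapter, the generated SQL can be inspected like this:

```vb
' The adapter's own InsertCommand property is still Nothing;
' the generated commands live inside the CommandBuilder
Console.WriteLine(cbEmployees.GetInsertCommand().CommandText)
Console.WriteLine(cbEmployees.GetUpdateCommand().CommandText)
Console.WriteLine(cbEmployees.GetDeleteCommand().CommandText)
```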

Although the CommandBuilder is simple and easy to use, understanding its limitations is important. The main limitation is that you have no control over what it does; it isn't configurable. It simply generates the update commands based on the provided Select statement, and there are no options. It is designed to generate commands for a single, independent database table. In other words, DataTables filled with the result of a SQL Join operation cannot be used with a CommandBuilder. Moreover, the commands are generated without considering that the table may be related to other database tables, which may result in foreign key violations when one of the database update operations is performed.

Another limitation of the CommandBuilder is that it won't generate the update commands if any table or column name includes special characters such as a space, period, or other nonalphanumeric characters, even if the name is delimited in brackets. However, fully qualified table names (such as database.owner.table) are supported.
To retrieve and save data to the database with the DataAdapter, add another form to the DataSetCode
project.

1. First, add a button below the btnDataAdapterFill button. Set the Name property of the new button to
btnDataAdapterUpdates and set the Text property to DataAdapter Updates.
2. Add a new form, frmUpdates, to the DataSetCode project.
3. In the Properties window for frmUpdates, set its Text property to DataAdapter Updates.
4. Enlarge the size of frmUpdates.
5. From the Windows Forms tab of the Toolbox, add a DataGrid to frmUpdates and place it on the right
side of the form.
6. In the Properties window, set the Name property of the DataGrid to grdDataGrid.
7. Enlarge the DataGrid so that it covers about 80 percent of the area of the form.
8. Add a button from the Windows Forms tab of the Toolbox in the upper-left corner of frmUpdates.
9. In the Properties window, set the Name property of the new button to btnLoad and set the Text
property to Load.
10. Add a button below the Load button.
11. In the Properties window, set the Name property of the new button to btnUpdate, set the Text property
to Update, and set the Enabled property to False.
12. Open the frmUpdates form in Code View and add the following lines to the top of the file:

Imports System
Imports System.Data
Imports System.Data.SqlClient

13. Add the code shown in Listing 6.3 to frmUpdates.


Listing 6.3 Using the SqlCommandBuilder to automatically generate the update commands used by the SqlDataAdapter

Private dsEmployeeInfo As DataSet
Private daEmployees As SqlDataAdapter
Private conn As New SqlConnection( _
    "server=localhost;uid=sa;pwd=;database=novelty")
' Use SqlCommandBuilder to automatically generate the update
' commands
Private cbEmployees As SqlCommandBuilder

Private Sub btnLoad_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnLoad.Click
    dsEmployeeInfo = New DataSet()
    LoadCommandBuilder()
    ' Configure grid
    Me.grdDataGrid.PreferredColumnWidth = 110
    Me.grdDataGrid.AllowSorting = True
    ' Fill DataSet
    daEmployees.Fill(dsEmployeeInfo, "Employees")
    ' Assign DataSet to DataGrid
    Me.grdDataGrid.DataSource = _
        dsEmployeeInfo.Tables("Employees")
    Me.btnUpdate.Enabled = True
End Sub

Private Sub btnUpdate_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnUpdate.Click
    daEmployees.Update(dsEmployeeInfo, "Employees")
End Sub

Private Sub LoadCommandBuilder()
    If conn.State = ConnectionState.Closed Then
        conn.Open()
    End If
    ' Create new DataAdapter object
    Dim SQL As String
    SQL = "Select FirstName, LastName, DepartmentID, Salary, ID from tblEmployee"
    daEmployees = New SqlDataAdapter(SQL, conn)
    ' Use SqlCommandBuilder to automatically
    ' generate the update commands
    cbEmployees = New SqlCommandBuilder(daEmployees)
End Sub

The main routine, LoadCommandBuilder, is called when the Load button is clicked on. This routine shows
how to open a connection object explicitly (and how to avoid an exception when clicking on the Load button
subsequent times) and how to set up the SqlDataAdapter (daEmployees) and SqlCommandBuilder
(cbEmployees) objects. These two objects are created and initialized with forms of their constructors that
accept the essential settings as parameters. The daEmployees constructor receives the Select string and
the connection object, whereas the cbEmployees receives the SqlDataAdapter object.

Note
Only one CommandBuilder object can be linked to a given DataAdapter at any given time (and vice versa).

All that remains to be done is to configure the grid, set the grid's DataSource property to the Employees
table in the DataSet , and call the Fill method to load the DataSet and have the grid automatically
display the data.
The Click event handler for the Update button contains a single line of code, which simply calls the Update
method of the daEmployees DataAdapter.
Run the DataSetCode project and then click on the DataAdapter Updates button on the frmDataSets form. When the frmUpdates form is displayed, click on the Load button. That will cause the data to be read from the tblEmployee database table, loaded into the Employees DataTable in the dsEmployeeInfo DataSet, and displayed in the grid, as shown in Figure 6.2.
Figure 6.2. Data from tblEmployee displayed in a DataGrid

You can now test this form on your own. Go ahead and make whatever changes you like. Add rows by
scrolling down to the last row of the grid, delete rows by selecting one or more rows and then pressing the
Delete key, or change column values by editing values within the grid cells. Remember that these changes
are not saved to the database until you click on the Update button. You can verify your changes by using any
of your favorite tools for viewing database tables or by just clicking on the Load button to cause the fetching
and reloading of the database data into the form's DataSet and grid.

Note
Although using the CommandBuilder to generate the required update commands requires a bare
minimum of code, it does have a significant downside, even if its limitations don't pose a problem.

The CommandBuilder must make an additional round-trip to the database server to retrieve the
metadata that it needs to generate the commands. This capability is very useful and flexible when
you're developing queries on the fly. However, if the queries are already known at design time,
explicitly specifying the commands and their parameters in code, using either explicit update
commands or the DataAdapter Configuration Wizard, will result in better performance.

Explicit Update Commands


If using the CommandBuilder is the extreme in simplicity for generating the required update commands, explicitly programming these commands in code is the extreme in flexibility, but it is a lot more work. Each of the four commands (Select, Insert, Update, and Delete) must be designed and hand-coded. More often than not, once you've begun the effort to code them explicitly, you'll go all the way and write SQL Server stored procedures for each of the commands.
Listing 6.4 shows the SQL Server script for generating the four stored procedures. The SelectEmployees stored procedure (SP) simply selects all the columns from the tblEmployee table. The InsertEmployee SP expects four parameters, one for each of the updatable columns. The ID column isn't updatable because it's an identity column. The UpdateEmployee SP expects the same four parameters for the updatable columns, plus a fifth parameter containing the original value of the ID column. This value is used in the WHERE clause to select the correct row to be updated (based on the primary key). The DeleteEmployee SP requires only the original value of the ID column as a parameter to select the correct row to be deleted.
Listing 6.4 SQL Server script to create stored procedures for the tblEmployee table

IF EXISTS (SELECT * FROM sysobjects WHERE name = 'SelectEmployees'
AND user_name(uid) = 'dbo')
DROP PROCEDURE [dbo].[SelectEmployees];
GO
CREATE PROCEDURE [dbo].[SelectEmployees]
AS
SET NOCOUNT ON;
SELECT FirstName, LastName, DepartmentID, Salary, ID FROM
tblEmployee;
GO
IF EXISTS (SELECT * FROM sysobjects WHERE name = 'InsertEmployee'
AND user_name(uid) = 'dbo')
DROP PROCEDURE [dbo].[InsertEmployee];
GO
CREATE PROCEDURE [dbo].[InsertEmployee]
(
@FirstName varchar (50),
@LastName varchar (70),
@DepartmentID int,
@Salary money
)
AS
SET NOCOUNT OFF;

INSERT INTO tblEmployee(FirstName, LastName, DepartmentID, Salary)
VALUES (@FirstName, @LastName, @DepartmentID, @Salary);
GO
IF EXISTS (SELECT * FROM sysobjects WHERE name = 'UpdateEmployee'
AND user_name(uid) = 'dbo')
DROP PROCEDURE [dbo].[UpdateEmployee];
GO
CREATE PROCEDURE [dbo].[UpdateEmployee]
(
@FirstName varchar (50),
@LastName varchar (70),
@DepartmentID int,
@Salary money,
@Original_ID int
)
AS
SET NOCOUNT OFF;
UPDATE tblEmployee SET FirstName = @FirstName, LastName = @LastName,
DepartmentID = @DepartmentID, Salary = @Salary
WHERE (ID = @Original_ID);
GO
IF EXISTS (SELECT * FROM sysobjects WHERE name = 'DeleteEmployee'
AND user_name(uid) = 'dbo')
DROP PROCEDURE [dbo].[DeleteEmployee];
GO
CREATE PROCEDURE [dbo].[DeleteEmployee]
(
@Original_ID int
)
AS
SET NOCOUNT OFF;
DELETE FROM tblEmployee WHERE (ID = @Original_ID);
GO

Let's now return to our application code. First, we change the first line of code in the routine btnLoad_Click
so that, instead of calling LoadCommandBuilder, it calls LoadExplicitCode. Also, some debugging was needed
when we developed the explicit updates code, so we added a Try-Catch block to the btnUpdate_Click
routine. It now looks like this:

Private Sub btnUpdate_Click(ByVal sender As System.Object, _


ByVal e As System.EventArgs) Handles btnUpdate.Click
Try
daEmployees.Update(dsEmployeeInfo, "Employees")
Catch es As SqlException
MessageBox.Show(es.Message)
End Try
End Sub

Finally, the code for setting the commands for the daEmployees DataAdapter is shown in Listing 6.5.
Listing 6.5 Routine to set up the four custom commands for the daEmployees DataAdapter

Private Sub LoadExplicitCode()
Dim param As SqlParameter
If conn.State = ConnectionState.Closed Then
conn.Open ()
End If
'Create New DataAdapter Object
daEmployees = New SqlDataAdapter ()
'Set up custom Select Command (SP)
daEmployees.SelectCommand = New SqlCommand ()
With daEmployees.SelectCommand
.Connection = conn
.CommandType = CommandType.StoredProcedure
.CommandText = "SelectEmployees"
End With
'Set up custom Insert Command (SP)
daEmployees.InsertCommand = New SqlCommand ()
With daEmployees.InsertCommand
.Connection = conn
.CommandType = CommandType.StoredProcedure
.CommandText = "InsertEmployee"
End With
param = daEmployees.InsertCommand.Parameters.Add( _
New SqlParameter("@FirstName", SqlDbType.VarChar, 50))
param.Direction = ParameterDirection.Input
param.SourceColumn = "FirstName"
param.SourceVersion = DataRowVersion.Current
param = daEmployees.InsertCommand.Parameters.Add( _
New SqlParameter("@LastName", SqlDbType.VarChar, 70))
param.Direction = ParameterDirection.Input
param.SourceColumn = "LastName"
param.SourceVersion = DataRowVersion.Current
param = daEmployees.InsertCommand.Parameters.Add( _
New SqlParameter("@DepartmentID", SqlDbType.Int))
param.Direction = ParameterDirection.Input
param.SourceColumn = "DepartmentID"
param.SourceVersion = DataRowVersion.Current
param = daEmployees.InsertCommand.Parameters.Add( _
New SqlParameter("@Salary", SqlDbType.Money))
param.Direction = ParameterDirection.Input
param.SourceColumn = "Salary"
param.SourceVersion = DataRowVersion.Current

'Set up custom Update Command (SP)
daEmployees.UpdateCommand = New SqlCommand()
With daEmployees.UpdateCommand
.Connection = conn
.CommandType = CommandType.StoredProcedure
.CommandText = "UpdateEmployee"
End With
param = daEmployees.UpdateCommand.Parameters.Add( _
New SqlParameter("@FirstName", SqlDbType.VarChar, 50))
param.Direction = ParameterDirection.Input
param.SourceColumn = "FirstName"
param.SourceVersion = DataRowVersion.Current
param = daEmployees.UpdateCommand.Parameters.Add( _
New SqlParameter("@LastName", SqlDbType.VarChar, 70))
param.Direction = ParameterDirection.Input
param.SourceColumn = "LastName"
param.SourceVersion = DataRowVersion.Current
param = daEmployees.UpdateCommand.Parameters.Add( _
New SqlParameter("@DepartmentID", SqlDbType.Int))
param.Direction = ParameterDirection.Input
param.SourceColumn = "DepartmentID"
param.SourceVersion = DataRowVersion.Current
param = daEmployees.UpdateCommand.Parameters.Add( _
New SqlParameter("@Salary", SqlDbType.Money))
param.Direction = ParameterDirection.Input
param.SourceColumn = "Salary"
param.SourceVersion = DataRowVersion.Current
param = daEmployees.UpdateCommand.Parameters.Add( _
New SqlParameter("@Original_ID", SqlDbType.Int))
param.Direction = ParameterDirection.Input
param.SourceColumn = "ID"
param.SourceVersion = DataRowVersion.Original
'Set up custom Delete Command (SP)
daEmployees.DeleteCommand = New SqlCommand ()
With daEmployees.DeleteCommand
.Connection = conn
.CommandType = CommandType.StoredProcedure
.CommandText = "DeleteEmployee"
End With
param = daEmployees.DeleteCommand.Parameters.Add( _
New SqlParameter("@Original_ID", SqlDbType.Int))
param.Direction = ParameterDirection.Input
param.SourceColumn = "ID"
param.SourceVersion = DataRowVersion.Original
End Sub

Note
The code to assign the values to each of the parameter objects could be written in a more
compact way (fewer lines of code) by calling a different overloaded form of the Add method. That
alternative form accepts values for all the required parameter properties in a single method call
with a long list of parameters. The advantage of the approach used in the preceding code listing is
that it is far more readable than the alternative.
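As a sketch, the four InsertCommand parameters above could be added with that compact overload, which takes the name, type, size, and source column in one call (SourceVersion defaults to Current, so only @Original_ID would still need an extra line):

```vb
With daEmployees.InsertCommand.Parameters
    .Add("@FirstName", SqlDbType.VarChar, 50, "FirstName")
    .Add("@LastName", SqlDbType.VarChar, 70, "LastName")
    .Add("@DepartmentID", SqlDbType.Int, 4, "DepartmentID")
    .Add("@Salary", SqlDbType.Money, 8, "Salary")
End With
```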

The code in the routine LoadExplicitCode is rather long but is basically straightforward. It is easy to understand once the interface (parameters and types) to the stored procedures has been determined. For each of the Command properties, a new instance of a SqlCommand object is created. We assign the common Connection object to it and set the CommandType and CommandText properties. We then need to create and configure all the command parameters required for each command.
Go ahead and play with the frmUpdates form, as you did before, to verify that it is working correctly.
Although it should seem to be working the same as before, the difference is that by calling LoadExplicitCode
it is using our custom commands to perform the database updates. This approach requires some more
coding on our part, but it offers the advantages of greater flexibility, improved performance, and centralized
management of database stored procedures.

Embedding Business Logic in the Update Commands


We mentioned earlier how using stored procedures as custom update commands for the DataAdapter allows us to embed some business logic in the stored procedures that are automatically invoked. The innovation here, compared to previous versions of ADO and other data access models, isn't the embedding of logic in stored procedures; that has always been done. Rather, it is that these stored procedures are invoked automatically when performing a "batch" update, instead of having to be invoked explicitly from the application code.
To see how this approach works, all you need to do is modify the stored procedure. Let's assume that our business logic says the following: If we are inserting a new employee record and the Salary column is either null (empty) or set to 0, we automatically set the employee's salary as a function of her department. We use some simple logic for this task: the automatically assigned salary is the employee's DepartmentID times $10,000. (We hope that your company uses a better algorithm for salary assignment!) The modified stored procedure now looks like this:

CREATE PROCEDURE dbo.InsertEmployee
(
@FirstName varchar(50),
@LastName varchar(70),
@DepartmentID int,
@Salary money
)
AS
SET NOCOUNT OFF;
if (@Salary = 0 or @Salary is null)
begin

-- Do complicated salary calculations
set @Salary = @DepartmentID * 10000
end
INSERT INTO tblEmployee(FirstName, LastName,
DepartmentID, Salary) VALUES
(@FirstName, @LastName, @DepartmentID, @Salary)
GO

Note
Because the InsertEmployee stored procedure already exists, you need to delete (drop) the
existing stored procedure or change the first line in the script to

ALTER PROCEDURE dbo.InsertEmployee

if you want to run the preceding script from the SQL Server Query Analyzer.

You can now run the DataSetCode project without making any changes to the application code. Add new
employee records on the frmUpdates form and verify that the stored procedure is assigning the correct
salaries to the automatically inserted rows.

Using the DataAdapter Design-Time Component


After dealing with all that code in the preceding section, we bet that you're probably wishing for some point-and-click code generation. Well, get ready for the DataAdapter Configuration Wizard!
The DataAdapter Configuration Wizard offers an array of options for configuring a DataAdapter object that is added to a form as a design-time component. It is more advanced than the design-time components that we've discussed so far. It not only provides a graphical interface for setting many of the internal objects and properties of the component, but also offers various options that affect the code that it automatically generates. Begin by doing the following:

1. Open the form frmUpdates in the form designer.


2. From the Data tab of the toolbox, drag a SqlDataAdapter component onto the design surface of
frmUpdates. Because this component isn't visible at run-time, it will appear in the component tray
beneath the form's design surface. The DataAdapter Configuration Wizard will automatically begin.
Click on the Next button on the Welcome dialog box to proceed.
3. On the Data Connection dialog box, select a connection to the Novelty database. If none currently
exists, you can create one by clicking on the New Connection button that displays the standard OLEDB
Data Link properties tabbed dialog box. Once you have selected a valid connection, click on the Next
button.
4. For the Query Type, you can specify either SQL statements, the creation of new procedures, or the use
of existing stored procedures. Let's go with the default of using SQL statements, although in practice
you may find it useful to have the wizard generate stored procedures for you. Click on the Next button.
5. The next dialog box is where you enter the Select statement to be used by the DataAdapter and as
the basis for the other three update statements, if they are generated. Enter the following into the
textbox:

SELECT FirstName, LastName, DepartmentID, Salary, ID FROM tblEmployee

There are two additional buttons on this dialog box. The Advanced Options button displays a dialog box with options that control how the commands are generated. The first checkbox specifies whether the three update commands should be generated (or whether you're just using the DataAdapter to fill a DataSet). The second checkbox specifies whether the Update and Delete commands should include a WHERE clause that detects whether the record has been modified at the database since it was originally loaded into the DataSet. The third checkbox specifies whether a Select statement should be appended to the Insert and Update statements in order to return the row filled with column values that are calculated at the server, such as Identity column values and default column values. The Query Builder button displays a standard query builder window, in order to graphically design the Select query statement instead of entering it directly into the textbox as we did in the step above.
6. Click on the Next button to see a summary of the steps taken by the wizard.
7. Click on the Finish button to apply the settings to the DataAdapter component.

Note
Once the DataAdapter component has been created, you can modify its properties and settings
either through the Properties window or by running the Configuration Wizard again. The wizard
can be started for an existing DataAdapter component by right-clicking on the component in the
component tray and then selecting the Configure Data Adapter menu item. You can also restart
the wizard by selecting the component in the component tray and then clicking on the Configure
Data Adapter link in the Properties window (on the pane between the properties list and the
Description pane).
Like the CommandBuilder, the DataAdapter wizard is designed to generate commands for a
single, independent database table. However, this wizard offers several configuration options,
such as using existing or new stored procedures, which make it very flexible and useful for writing
production code.

We now need to link the DataAdapter, automatically named SqlDataAdapter1, to our existing program. For
consistency, we also explicitly open the created connection, named SqlConnection1, in our code.
The routine btnLoad_Click needs to be modified so that it calls LoadWizardCode instead of
LoadExplicitCode. In addition, it needs to call the Fill method of our newly created DataAdapter. The
routine btnUpdate_Click also needs to be modified so that it uses the new SqlDataAdapter1 component.
Finally, we need to add the LoadWizardCode routine, whose only remaining task is to open the new
connection. These three routines are shown in Listing 6.6.
Listing 6.6 Modified and added routines to use the SqlDataAdapter component in the existing

application.

Private Sub btnLoad_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnLoad.Click
    dsEmployeeInfo = New DataSet()
    'LoadCommandBuilder()
    'LoadExplicitCode()
    LoadWizardCode()

    'Config grid
    Me.grdDataGrid.PreferredColumnWidth = 110
    Me.grdDataGrid.AllowSorting = True

    'Fill Data Set
    'daEmployees.Fill(dsEmployeeInfo, "Employees")
    SqlDataAdapter1.Fill(dsEmployeeInfo, "Employees")

    'Assign DataSet to DataGrid
    Me.grdDataGrid.DataSource = _
        dsEmployeeInfo.Tables("Employees")
    Me.btnUpdate.Enabled = True
End Sub

Private Sub btnUpdate_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnUpdate.Click
    Try
        'daEmployees.Update(dsEmployeeInfo, "Employees")
        SqlDataAdapter1.Update(dsEmployeeInfo, "Employees")
    Catch es As SqlException
        MessageBox.Show(es.Message)
    End Try
End Sub

Private Sub LoadWizardCode()
    If SqlConnection1.State = ConnectionState.Closed Then
        SqlConnection1.Open()
    End If
End Sub

In case you're wondering what the code generated by the wizard looks like, you can view it by opening frmUpdates in the
Visual Studio code editor and expanding the Windows Form Designer generated code region. It is
conceptually very similar to the code we wrote ourselves in Listing 6.5.

Note
We haven't changed the default object names assigned by the DataAdapter Configuration Wizard,

such as SqlDataAdapter1 and SqlSelectCommand1 . However, you can change the names to be
more meaningful or to conform to your specific naming conventions. The design-time component
names can be changed by selecting the component in the component tray beneath the form's
design surface and then setting the Name property in the Properties window.
To change the names of the individual commands (such as SelectCommand and
InsertCommand), continue in the Properties window for the DataAdapter component. Locate the
command that you want to modify and click on the "+" to expand the desired command object.
Doing so exposes all the properties of that command object, allowing you to modify the name and
other properties, as desired.

As you've done with the previous versions of the DataSetCode project using the CommandBuilder and the
explicit SQL commands, run the program to verify that this version also works correctly. Note how quickly
we generated this fully functional code. You'll appreciate it even more if you also
have the wizard generate stored procedures for you!
The DataAdapter design-time component offers one additional useful feature: Preview Data,
which lets you see the data returned by the Select statement of the DataAdapter at design time.
To use it, right-click on the DataAdapter component in the component tray and select the Preview Data
item from the menu displayed. You can also display this dialog window by selecting the component in the
component tray and then clicking on the Preview Data link in the Properties window (on the pane between
the properties list and the Description pane). Select the desired DataAdapter from the DataAdapter's
listbox and then click on the Fill DataSet button. The results for the SqlDataAdapter that we just added to
frmUpdates are shown in Figure 6.3.
Figure 6.3. Data from tblEmployee displayed in the DataAdapter Preview window

Business Case 6.1: Combining Multiple Related Tables


As we've pointed out in this chapter, none of the available techniques for specifying update commands easily
lend themselves to updating multiple tables. This is particularly true if there is a Relation defined between
them, as in the case where a parent-child relationship exists between the tables. Does that mean that
ADO.NET can't handle such a situation? The fearless database developer at Jones Novelties, Incorporated,
used the capabilities of ADO.NET to prove that such isn't the case. She developed a form that displays and
allows updates to both customer and related order information. It also illustrates the use of a batch of SQL
Server commands to fill both tables with only a single round-trip to the server. To build the form, she
proceeded to:

1. Launch Visual Studio .NET.

2. Create a new Visual Basic Windows Application project.

3. Name the project BusinessCase6.

4. Specify a path for where she wants the project files to be saved.

5. Enlarge the size of Form1 and set its Name property to frmCustomersOrders and its Text property to
Customers and Orders.

6. Add a button named btnFill with Text property set to Fill, a button named btnUpdate with Text
property set to Update, and a DataGrid named grdCustomersOrders.

7. Arrange the controls as shown in Figure 6.4.
Figure 6.4. Arrangement of the controls on frmCustomersOrders

After adding

Imports System.Data
Imports System.Data.SqlClient

at the top of the file, she adds the code shown in Listing 6.7 within the body of the class definition for
frmCustomersOrders.
Listing 6.7 Routines to load and update multiple related tables

Private ds As DataSet
Private cn As New SqlConnection( _
    "server=localhost;uid=sa;database=Novelty")

Private Sub btnFill_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnFill.Click
    Dim da As New SqlDataAdapter()
    grdCustomersOrders.DataSource = Nothing
    ds = New DataSet()

    ' Set up batch select command
    da.SelectCommand = New SqlCommand()
    da.SelectCommand.Connection = cn
    da.SelectCommand.CommandType = CommandType.Text
    da.SelectCommand.CommandText = _
        "select * from tblCustomer; select * from tblOrder"

    ' Table mappings for clear names
    da.TableMappings.Add("Table", "Customers")
    da.TableMappings.Add("Table1", "Orders")

    ' Load Data
    da.Fill(ds)

    ' Manually add relation
    ds.Relations.Add("Customer_Orders", _
        ds.Tables("Customers").Columns("ID"), _
        ds.Tables("Orders").Columns("CustomerID"))

    ' Display the data
    grdCustomersOrders.DataSource = ds
End Sub

Private Sub btnUpdate_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnUpdate.Click
    ' Create and configure DataAdapters
    Dim daCustomers As New SqlDataAdapter( _
        "select * from tblCustomer", cn)
    Dim daOrders As New SqlDataAdapter( _
        "select * from tblOrder", cn)
    Dim cbCustomers As New SqlCommandBuilder(daCustomers)
    Dim cbOrders As New SqlCommandBuilder(daOrders)

    Try
        'Now, table by table, in "correct" order
        Dim ChangedTable As New DataTable()

        ' Deleted rows in child table
        ChangedTable = _
            ds.Tables("Orders").GetChanges(DataRowState.Deleted)
        If Not ChangedTable Is Nothing Then
            daOrders.Update(ChangedTable)
        End If

        ' All changed rows in parent table
        ChangedTable = ds.Tables("Customers").GetChanges()
        If Not ChangedTable Is Nothing Then
            daCustomers.Update(ChangedTable)
        End If

        ' Added or modified rows in child table
        ChangedTable = _
            ds.Tables("Orders").GetChanges(DataRowState.Added _
                Or DataRowState.Modified)
        If Not ChangedTable Is Nothing Then
            daOrders.Update(ChangedTable)
        End If
    Catch ex As Exception
        MessageBox.Show(ex.Message)
    End Try
End Sub

The first routine, btnFill_Click, reads both tables from the database in a single round-trip, by executing
a batch of SQL Server commands. The different commands are separated by a semicolon (';') in the
CommandText string.
The DataSet default table names of Table and Table1 are mapped to the more meaningful names of
Customers and Orders, in the lines

' Table mappings for clear names


da.TableMappings.Add("Table", "Customers")
da.TableMappings.Add("Table1", "Orders")

Note
We discuss table and column mappings in detail in Chapter 7.

After the DataSet ds has been filled with the data, a DataRelation is created to link the two tables, with
the Customers table being the parent table and the Orders table being the child table. The last line of code in
the routine binds the DataSet to the grid to display the data.
The second routine, btnUpdate_Click, causes changes in both tables to be updated to the database. Here
the data integrity of a parent-child relationship must be ensured. Unfortunately, that doesn't happen
automatically. Jones's database developer needs to group types of changes and then execute them in the
correct order. For two tables that have a parent-child relationship, she should execute the changes in the
following order.

1. Delete rows in the child table


2. Insert, update, and delete rows in the parent table
3. Insert and update rows in the child table
To obtain the appropriate changes, the routine makes calls to the GetChanges method of the appropriate
table. It specifies the desired filter on the row state, as required. Each call to GetChanges returns a
DataTable containing only those rows that have been changed (subject to the row state filter). If there are
no changed rows, Nothing is returned. As long as there is at least one changed row, the DataAdapter's
Update method is called to actually update the database. The code in this function is wrapped inside a Try-Catch block in case any errors occur while trying to update the database.

Now it is time to check out the database developer's form. To do so, follow these steps.

1. Run the BusinessCase6 project and then click on the Fill button. Doing so causes the DataSet to be
filled with the data from the Novelty database. However, as the line of code

grdCustomersOrders.DataSource = ds

binds the entire DataSet , rather than a specific DataTable, to the grid, all that is displayed is
the beginnings of the grid and "+", as shown in Figure 6.5, indicating that you can expand what
is being displayed.
Figure 6.5. The initial display of frmCustomersOrders, after filling the DataSet with data

2. Click on the "+" to expand the grid view. The grid now displays two Weblike links, one for each table in
the DataSet .
3. Click on the Customers link. The grid now displays the rows of data in the Customers table. Note that
each row in the Customers table has a "+" to its left, indicating that the table is related to one or more
other tables. Clicking on the "+" expands the list of DataRelations for that table. In this case, only
one link, for the relation Customer_Orders, was created in the routine btnFill_Click, as shown in
Figure 6.6.
Figure 6.6. Customer_Orders Relation link for a row in the Customers table

4. Click on the Customer_Orders link for the first row. This uses the definition of the Customer_Orders
DataRelation to fetch and display the rows in the Orders table that are related to the current
Customers row.

Note
At any point while navigating through DataTables and DataRelations in the grid, you can
retrace your steps by pressing the Navigate Back arrow at the top of the grid.
No Orders rows should be displayed, because the Jones Novelties company still has its orders data in
an Access database and hasn't yet moved them to the new SQL Server database. Jones wants to test
the new system being developed, so he will have his developer input test data via this form. Doing so
will not only create test data, but it will also verify that new rows can be inserted! The grid does
present a new row to be inserted into the table. The CustomerID field is already set to the value of 1
because the grid recognized that it is the value in the related customer row. Go ahead and add values
for the OrderDate and Amount fields. There is no point in adding a value for the ID field because it is
an identity column and will automatically be assigned a value by the database when the record is
inserted.
5. Click on the Update button to execute the routine btnUpdate_Click from Listing 6.7 and cause the
database to be updated.
6. You can verify that the update (the new row added) was made against the database by clicking on the Fill
button to cause the data from the database to be reloaded in the DataSet and grid. Navigate to the
Orders table of the first Customer row, and the new row that you just inserted should be there!
Feel free to make additional changes to the database by adding, deleting, and modifying rows in both tables
and verifying that the updates indeed were performed.

Note
If you're wondering why you can successfully delete a Customer row even though it still contains
Orders rows, it is because the default behavior of the ForeignKeyConstraint created by the
Customer_Orders Relation is to cascade deletions (and updates) made to a parent table down to
the child table.


Summary
In this chapter we took a close look at the DataAdapter, which is a principal ADO.NET object. The
DataAdapter is the bridge between the disconnected world of the DataSet and its associated objects and
the connected world of the .NET Data Provider objects that actually connect and communicate with a
physical data source.
The DataAdapter is used to fill a DataSet with data from a data source, using either explicit commands or
stored procedures. The DataAdapter also provides automatic updating to the data source of the changes
made to the DataSet , while also allowing for complete customization of the commands to be used for the
Insert, Update, and Delete operations against the data source.

Questions and Answers

Q1: Sometimes I want to be able to create and populate a set of data records programmatically
and then update the database with this data. When I tried this with ADO 2.X, I could create
the data records, but I was unable to update the database. Is there any change in this area in
ADO.NET?

A1: You bet there is! As we discussed in Chapter 5, the DataSet is a container for data, but it
doesn't know or care where that data came from. When you want to update a database with the
data from a DataSet, just connect it to an appropriately configured DataAdapter, call the
DataAdapter's Update method, and the database will be updated. This is true even if the
DataSet data is created programmatically, rather than by using the DataAdapter's Select
command to fill the DataSet. When you're ready to push the DataSet's data into a data source,
connect it to a properly configured DataAdapter and perform the update.
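As an illustration of this answer, here is a sketch of building a table entirely in code and then pushing it to the database. The sample row values are arbitrary, and the connection string is the one used throughout this chapter; the CommandBuilder supplies the InsertCommand needed for the new rows:

```vb
' Build a DataTable programmatically -- no data source involved yet.
Dim ds As New DataSet()
Dim dt As DataTable = ds.Tables.Add("Employees")
dt.Columns.Add("FirstName", GetType(String))
dt.Columns.Add("LastName", GetType(String))
dt.Rows.Add(New Object() {"Jane", "Doe"})

' Connect the DataSet to a configured DataAdapter and update.
Dim da As New SqlDataAdapter("select FirstName, LastName " & _
    "from tblEmployee", "server=localhost;uid=sa;database=Novelty")
Dim cb As New SqlCommandBuilder(da)
da.Update(ds, "Employees")
```

The rows added in code are in the Added state, so the Update call inserts them into tblEmployee exactly as if they had been entered through a bound grid.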


Chapter 7. ADO.NET: Additional Features and Techniques


IN THIS CHAPTER

Detecting Concurrency Conflicts


Table and Column Mappings

DataViews
Strongly Typed DataSets
In the previous several chapters we took a close look at the ADO.NET data access architecture and approach.
We also discussed and illustrated the use of many of the ADO.NET objects, including their main properties
and methods. In this chapter we take a look at four additional features and techniques of ADO.NET, which
usually don't come into play until you start writing real production code. We gather and present these topics
to help you get a head start on developing applications.


Detecting Concurrency Conflicts


If you've written (or even used) a multiuser database application, you've probably run into concurrency
conflicts. They arise when multiple users try to modify the same database data rows at the same time. That
is, User A reads some data, User B reads the same data, and User A modifies that data. Now User B comes
along and also wants to modify the same data. How do you deal with this problem? There are two basic
approaches to handling concurrency control.
The first approach, pessimistic concurrency control (locking), essentially prevents the problem of User B
overwriting the changes of User A by never letting the situation get to that point. In this approach, when a
user reads data with the intention of modifying it, a lock is placed on that data. This lock makes that data
unavailable to other users until the first user has completed his task and releases the lock.
Such an approach is useful when there is a lot of contention for the same data or when a user must always
be able to see the most up-to-date values of the data. A typical scenario might be a real-time inventory
management or order management system, where the user doesn't want to accept any order unless she is
positive that the item(s) are in stock.
The major disadvantages of pessimistic concurrency control are the extra overhead of constantly managing
the data locks, the need for a continuous connection to the database, and the lack of scalability. This
approach has scalability problems, especially when used in a distributed environment (such as the Internet),
wherein users may end up locking records for many seconds or even minutes.
The second approach is optimistic concurrency control, or optimistic locking. In this approach, no data rows are locked,
except for the very short span of time when the data is actually being updated. This approach avoids the
issue of lock management and the problem of scalability, and it works just fine when a user is performing
editing operations while disconnected from the database. However, what happens when User B returns to
update data that has already been modified by User A? One option is to say that the last update is the only
one that counts. However, there are not too many applications where this is a feasible policy.
What you need to do with optimistic concurrency is to detect whether the data has been modified since it
was originally retrieved; such a modification is called a concurrency violation. There are two basic approaches to implementing this
detection. The first is to maintain a timestamp or unique version number for each row, which is updated
whenever the row is modified. The original value of the timestamp or version number is included as part of
the WHERE clause of the update statement. The second approach is to save the original values of the fields.
These values become additional conditions in the WHERE clause of the update statement. In either case, if
the original row has been changed, the condition in the WHERE clause won't be met, the row won't be found,
and no row will be updated.
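For comparison, the first (timestamp) approach would produce an Update statement like the following sketch. It assumes a version column named LastUpdated had been added to tblEmployee, which is not the case in our sample Novelty database:

```sql
UPDATE tblEmployee
SET FirstName = @FirstName,
    LastName = @LastName,
    Salary = @Salary
WHERE (ID = @Original_ID) AND
      (LastUpdated = @Original_LastUpdated)
-- If another user has modified the row, LastUpdated no longer
-- matches, the row isn't found, and no row is updated.
```

A single version column keeps the WHERE clause short, at the cost of adding a column to the table; the original-values approach, shown in Listing 7.1, requires no schema change.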

Note
If you used optimistic locking with ADO 2.X and wondered why the error message associated with
a concurrency violation refers to not being able to find the specified row, rather than stating that
there was a concurrency violation, now you know why.

ADO.NET supports only optimistic concurrency control; there is currently no built-in support for pessimistic
locking. Visual Studio offers several options for implementing optimistic concurrency. This support is in line
with the general pattern of extensive support for distributed, disconnected, and asynchronous application
architectures.
The SQL statements for the Update and Delete commands generated by both the CommandBuilder and the
DataAdapter Configuration Wizard include a WHERE clause that detects concurrency conflicts. Let's take
a look at the relevant code that we generated in Chapter 6 with the DataAdapter Configuration Wizard. You
can obtain the generated code by expanding the Windows Form Designer generated code region when
viewing the form frmUpdates in the code window. First, look at the SQL Update statement (reformatted for
easier reading) in Listing 7.1 .
Listing 7.1 The SQL Update Statement generated by the DataAdapter Configuration Wizard

UPDATE tblEmployee
SET FirstName = @FirstName,
LastName = @LastName,
DepartmentID = @DepartmentID,
Salary = @Salary
WHERE
(ID = @Original_ID) AND
(DepartmentID = @Original_DepartmentID OR
@Original_DepartmentID IS NULL AND DepartmentID IS NULL) AND
(FirstName = @Original_FirstName) AND
(LastName = @Original_LastName) AND
(Salary = @Original_Salary OR @Original_Salary IS NULL AND Salary IS NULL)
;
SELECT FirstName, LastName, DepartmentID, Salary, ID
FROM tblEmployee WHERE (ID = @ID)

It starts as a standard Update statement, setting the values of the four updatable columns to the new values
passed as parameters to the UpdateCommand object. The WHERE clause contains the primary key field (ID),
as well as the original values of each of the other columns, and tests to see if these original values match the
current values for the row in the database. This generated statement goes even further and checks for NULL
values in both the database and current values for columns that are nullable.
A Select statement (the one we specified when configuring the DataAdapter ) follows the semicolon. The
semicolon is the separator between commands in a batch statement, and the Select statement is added by
default to return the refreshed row to the application.
Let's now look at the code for setting the parameters for the UpdateCommand object, as shown in Listing 7.2
.
Listing 7.2 Code to set command parameters generated by the DataAdapter Configuration Wizard

Me.SqlUpdateCommand1.Parameters.Add(New System.Data.SqlClient.SqlParameter( _
    "@FirstName", System.Data.SqlDbType.VarChar, 50, "FirstName"))
Me.SqlUpdateCommand1.Parameters.Add(New System.Data.SqlClient.SqlParameter( _
    "@LastName", System.Data.SqlDbType.VarChar, 70, "LastName"))
Me.SqlUpdateCommand1.Parameters.Add(New System.Data.SqlClient.SqlParameter( _
    "@DepartmentID", System.Data.SqlDbType.Int, 4, "DepartmentID"))
Me.SqlUpdateCommand1.Parameters.Add(New System.Data.SqlClient.SqlParameter( _
    "@Salary", System.Data.SqlDbType.Money, 8, "Salary"))
Me.SqlUpdateCommand1.Parameters.Add(New System.Data.SqlClient.SqlParameter( _
    "@Original_ID", System.Data.SqlDbType.Int, 4, _
    System.Data.ParameterDirection.Input, False, CType(0, Byte), _
    CType(0, Byte), "ID", System.Data.DataRowVersion.Original, Nothing))
Me.SqlUpdateCommand1.Parameters.Add(New System.Data.SqlClient.SqlParameter( _
    "@Original_DepartmentID", System.Data.SqlDbType.Int, 4, _
    System.Data.ParameterDirection.Input, False, CType(0, Byte), _
    CType(0, Byte), "DepartmentID", System.Data.DataRowVersion.Original, Nothing))
Me.SqlUpdateCommand1.Parameters.Add(New System.Data.SqlClient.SqlParameter( _
    "@Original_FirstName", System.Data.SqlDbType.VarChar, 50, _
    System.Data.ParameterDirection.Input, False, CType(0, Byte), _
    CType(0, Byte), "FirstName", System.Data.DataRowVersion.Original, Nothing))
Me.SqlUpdateCommand1.Parameters.Add(New System.Data.SqlClient.SqlParameter( _
    "@Original_LastName", System.Data.SqlDbType.VarChar, 70, _
    System.Data.ParameterDirection.Input, False, CType(0, Byte), _
    CType(0, Byte), "LastName", System.Data.DataRowVersion.Original, Nothing))
Me.SqlUpdateCommand1.Parameters.Add(New System.Data.SqlClient.SqlParameter( _
    "@Original_Salary", System.Data.SqlDbType.Money, 8, _
    System.Data.ParameterDirection.Input, False, CType(0, Byte), _
    CType(0, Byte), "Salary", System.Data.DataRowVersion.Original, Nothing))
Me.SqlUpdateCommand1.Parameters.Add(New System.Data.SqlClient.SqlParameter( _
    "@ID", System.Data.SqlDbType.Int, 4, "ID"))

Ten command parameters are defined for this command object. The first four are the current (possibly
modified) values of the columns that are to be updated to the row in the database. Remember, we discussed
earlier in Chapter 5 that each row maintains as many as four different versions of the values for that row. By
default, if you don't specify otherwise, you receive the current value for the column that you read.
The next five parameters are the original values of all of the columns used as the values in the WHERE clause.
Note that, to retrieve the original value of a column (rather than the default current value), you need to
specify the row version as

System.Data.DataRowVersion.Original

in the constructor for the SqlParameter added to the command object.

Note
You don't have to include the original values of all of the columns in the WHERE clause. You can
customize any of the update command objects, so you may decide that, when updating a row, you
need only be alerted if another user modified one or two specific columns. But you can go ahead
and update the database if one of the other columns was modified.

The last parameter is the current value of the ID column, used as the parameter for the Select statement
used to bring back the updated values of the row.
After each insert, update, or delete operation, the DataAdapter examines the number of rows affected by
the operation. If the number of rows affected is zero, it throws the DBConcurrencyException exception
because it assumes that this outcome is usually the result of a concurrency violation. We could add an
exception handler for this to our Try-Catch block in the routine btnUpdate_Click, as shown in Listing 7.3.
Listing 7.3 Try-Catch block with exception handler for DBConcurrencyException

Private Sub btnUpdate_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnUpdate.Click
    Try
        daEmployees.Update(dsEmployeeInfo, "Employees")
        'SqlDataAdapter1.Update(dsEmployeeInfo, "Employees")
    Catch ec As DBConcurrencyException
        'Do something !!
    Catch es As SqlException
        MessageBox.Show(es.Message)
    End Try
End Sub


Table and Column Mappings


The DataAdapter object maintains a collection of DataTableMapping objects, which it exposes as the
TableMappings property. The purpose of these mappings is to allow the mapping of table names and/or
column names from their names in the data source to their names in the DataSet. Of course, once set up,
this mapping works in both directions: both when reading from the data source to the DataSet and when
writing from the DataSet to the data source.
In our examples featuring use of the Fill method of the DataAdapter, we would specify both the DataSet
and the name of the table in the DataSet to be filled with the data being read, such as:

daEmployees.Fill(dsEmployeeInfo, "Employees")

However, the second argument to the Fill method is really the name of a TableMapping. The DataAdapter
looks to see if it has a defined mapping with that name and, if it does, it uses the information there to
complete the Fill operation. However, if it doesn't have such a mapping, it creates a table with the name of
the passed parameter and fills that with the data.
What that means is that, if we add a mapping named MappingName and map it to a table named
empDataSetTable with the line

daEmployees.TableMappings.Add("MappingName", "empDataSetTable")

we could then call the Fill method with the line

daEmployees.Fill(dsEmployeeInfo, "MappingName")

Doing so would cause the data being read to be filled into empDataSetTable in the DataSet
dsEmployeeInfo.
Once we have defined a table mapping, we can add column mappings to it. This approach is most useful
when you want your application code to use column names different from those used in the data source.
When using a table mapping to fill a DataSet , the DataAdapter will look for any column mappings for that
table mapping and use them to map the column names. Any columns that don't have a column mapping
defined will use the data source column names for the names in the DataSet .
For example, did you ever wonder what year it was when the SQL Server sample database pubs was first
designed and what the naming limitations were that caused them to come up with such abbreviated names?
Column mappings allow loading the pubs database tables into our DataSet but with more readable column
names. If your chief DBA and chief software designer insist on conflicting naming conventions, column
mappings could make them both (and you) happy.
Let's continue with our table mapping example. In addition to mapping the table, we want to map all the

column names to comply with the demand of our chief software designer that all object names begin with a
three-letter prefix indicating to whom they belong. Our mapping code would now look like Listing 7.4.
Listing 7.4 Mapping the table and column names

daEmployees.TableMappings.Add("MappingName", _
    "empDataSetTable")
With daEmployees.TableMappings("MappingName").ColumnMappings
    .Add("ID", "empEmployeeID")
    .Add("FirstName", "empFirstName")
    .Add("LastName", "empLastName")
    .Add("DepartmentID", "empDepartmentID")
    .Add("Salary", "empSalary")
End With
daEmployees.Fill(dsEmployeeInfo, "MappingName")

Previously, in Chapter 6 (Listings 6.1 and 6.2), we wrote a function ReadData that filled a DataSet with
data from a database table and then displayed the contents of that DataSet in the listbox on frmDataSets. If
we have btnDataAdapterFill_Click call a modified version of that function named ReadDataMapped,
which contains the code shown in Listing 7.5, we can run the DataSetCode project and see the results of the
table and column mappings. These results are shown in Figure 7.1.
Figure 7.1. Displaying the contents of the dsEmployeeInfo DataSet obtained with table and
column mappings

Listing 7.5 ReadData modified to use table and column mappings

Private Sub ReadDataMapped()
    Dim daEmployees As SqlDataAdapter = New _
        SqlDataAdapter("select * from tblEmployee", _
        "server=localhost;uid=sa;database=novelty")
    dsEmployeeInfo = New DataSet()

    'Configure Table and Column mappings
    daEmployees.TableMappings.Add("MappingName", "empDataSetTable")
    With daEmployees.TableMappings("MappingName").ColumnMappings
        .Add("ID", "empEmployeeID")
        .Add("FirstName", "empFirstName")
        .Add("LastName", "empLastName")
        .Add("DepartmentID", "empDepartmentID")
        .Add("Salary", "empSalary")
    End With

    daEmployees.Fill(dsEmployeeInfo, "MappingName")
    DisplayDataSet(dsEmployeeInfo)
End Sub

Note
The default table mapping is named Table. This mapping is used if only the DataSet name is
specified in the call to the Fill (or Update) method. That's why by default a table filled from
such a call to Fill will be named Table in the DataSet . However, you can explicitly define a
mapping named Table and specify the table name that you want. Therefore the following lines of
code

daEmployees.TableMappings.Add("Table", "MyTableName")
daEmployees.Fill(dsEmployeeInfo)

will result in the creation and filling of a table named MyTableName.


DataViews
The DataView object allows us to create different views of the data stored in a DataTable, enabling
multiple simultaneous views of the same data. The DataView has properties that customize the
data exposed, based on:

Sort order: one or more columns, either ascending or descending

Row filter: an expression specifying criteria for which rows are to be exposed, based on
column values

Row state filter: an expression specifying criteria for which rows are to be exposed, based on the state
of the row (see the DataViewRowState enumeration shown in Table 5.3)
Although the DataView may seem similar to the DataTable's Select method, there is a significant
difference. The DataView is a fully dynamic view of the data: in addition to column value
changes, additions and deletions of rows in the underlying table are immediately reflected in the data
exposed by the DataView. The Select method, on the other hand, returns a fixed-length array of row
references, which reflects changes to column values in the underlying table but doesn't reflect any changes
in membership (additions or deletions) or ordering. This dynamic aspect of the DataView makes it
particularly well suited for data-binding scenarios.
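The difference is easy to see in a short sketch. This is only an illustration; it assumes a DataSet named dsCustomers whose Customers table (as used throughout this chapter) is already filled and has FirstName and LastName columns:

```vb
Dim tbl As DataTable = dsCustomers.Tables("Customers")

' Select returns a fixed-length snapshot of row references...
Dim snapshot() As DataRow = tbl.Select("LastName > 'F'", "LastName ASC")

' ...whereas a DataView with the same criteria stays live.
Dim view As New DataView(tbl, "LastName > 'F'", "LastName ASC", _
    DataViewRowState.CurrentRows)

' A row added now shows up in the view but not in the snapshot array.
Dim row As DataRow = tbl.NewRow()
row("FirstName") = "Zelda"
row("LastName") = "Gordon"
tbl.Rows.Add(row)
' snapshot.Length is unchanged; view.Count includes the new row.
```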

Note
Although the DataView is similar to a classical database view, it differs in several significant
ways.

It cannot be used as if it were a table.


It cannot provide a join of multiple tables.
It cannot exclude columns that exist in the underlying table.
It cannot add additional columns (for example, computed columns) that do not exist in the
underlying table.

You can immediately start using views by accessing the DefaultView property of the DataTable. Suppose
that you want to have a view of the Customers table that only exposes customers whose last name begins
with 'C' and orders them by zip code. To do so, just set the two corresponding property values:

dsCustomers.Tables("Customers").DefaultView().RowFilter = _
"LastName Like 'C*'"

dsCustomers.Tables("Customers").DefaultView().Sort = "Zip"

If, instead, you want to expose the current values of only those rows that have been modified (but not yet
saved), you would reset the RowFilter property and set the RowStateFilter property:

dsCustomers.Tables("Customers").DefaultView().RowFilter = ""
dsCustomers.Tables("Customers").DefaultView().RowStateFilter = _
    DataViewRowState.ModifiedCurrent

Note
The DataView also has the Find method to search for a single row and the FindRows method to
search for and return multiple rows. These methods use the current setting of the Sort property
as the key for their searches. If you're interested in retrieving the row or set of rows matching a
specific criterion rather than maintaining a dynamic view of the data, using the Find or FindRows
method (instead of setting the RowFilter property) returns only the rows of interest. It also
provides better performance than setting the RowFilter property. The reason is that setting the
RowFilter property causes the view's index to be rebuilt, whereas Find and FindRows use the
already existing index.
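To make the Find and FindRows methods concrete, here is a minimal sketch. It assumes the filled Customers table used throughout this chapter; the search value "Smith" is purely illustrative:

```vb
Dim view As DataView = dsCustomers.Tables("Customers").DefaultView

' Find searches on the current Sort key, so set Sort first.
view.Sort = "LastName"

Dim rowIndex As Integer = view.Find("Smith")
If rowIndex <> -1 Then
    ' The matching row is exposed as a DataRowView.
    Console.WriteLine(view(rowIndex)("FirstName"))
End If

' FindRows returns every matching row as an array of DataRowViews.
Dim matches() As DataRowView = view.FindRows("Smith")
```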

Additional DataViews can also be created for a table. If you wanted to define another view onto the
Customers table, you could create another view and set its properties as desired:

dvView2 = New DataView(dsCustomers.Tables("Customers"), "", _
    "LastName", DataViewRowState.CurrentRows)

Of course, after creating the view, you can modify its settings:

dvView2.RowFilter = "LastName > 'F'"
dvView2.Sort = "LastName DESC"
dvView2.RowStateFilter = DataViewRowState.CurrentRows

Note
There is also a DataViewManager object. It provides a convenient centralized way of managing
the settings for the default views of all the tables in a DataSet .
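As a quick illustration of the DataViewManager, here is a sketch that assumes a DataSet containing the Customers and Orders tables used in this chapter (the filter expression is illustrative):

```vb
Dim dvm As New DataViewManager(dsCustOrders)

' Each table's default-view settings are managed in one place.
dvm.DataViewSettings("Customers").Sort = "LastName"
dvm.DataViewSettings("Orders").RowFilter = "OrderAmount > 100"

' Binding through the manager applies these settings to the grid.
DataGrid1.DataSource = dvm
DataGrid1.DataMember = "Customers"
```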

In most other ways, the DataView is very much like a DataTable. Individual rows and the column values of
the view are accessed via the DataRowView object. This object also supports navigating relations that have
been defined between the DataSet tables.
The DataView has an editing model similar to that of the DataTable. Once editing has been enabled, by
setting the AllowNew, AllowEdit, or AllowDelete property to True, the corresponding editing
operation(s) may be performed. The BeginEdit, EndEdit , and CancelEdit methods of the DataRowView
control application of the changes to the underlying DataTable. EndEdit places the changes in the Current
row version of the DataRow underlying the DataRowView. These changes are then accepted (or rejected) by
the underlying DataTable when AcceptChanges (or RejectChanges) is called.
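A minimal sketch of that editing sequence, assuming the filled Customers table from this chapter and an editable default view (the new value is illustrative):

```vb
Dim view As DataView = dsCustomers.Tables("Customers").DefaultView
view.AllowEdit = True

Dim drv As DataRowView = view(0)
drv.BeginEdit()
drv("LastName") = "Changed"
drv.EndEdit()   ' places the change in the Current row version

' The underlying DataTable then accepts (or rejects) the change.
dsCustomers.Tables("Customers").AcceptChanges()
```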
Let's now take a look at some of these concepts in action. We will add a new form to the DataSetCode
project to provide two different views of a single Customers table. For each view, a DataGrid will display the
view data, along with a set of controls for specifying the view's sort and filter properties. To get this result
we do the following.

1. On the frmDataSets form, add a button beneath the DataAdapterUpdates button.
2. Name the new button btnDataViews and set its Text property to Data Views.
3. Add a form, frmDataViews, to the DataSetCode project.
4. In the Properties window for frmDataViews, set its Text property to Dueling DataViews.
5. Enlarge the size of frmDataViews.
6. Add a DataGrid named DataGrid1, a textbox named txtFilter1, a combobox named cboSort1, a
checkbox named chkDesc1, a combobox named cboRowState1, a button named btnApply1, and three
labels to the upper portion of the frmDataViews form.
7. Set the checkbox's Text property to Descending, and the button's Text property to Apply. Set the
DropDownStyle property for both comboboxes to DropDownList. Set the label Text properties to
Filter:, Sort by Column:, and Row State.
8. Set the CaptionText property of DataGrid1 to Default DataView.
9. Arrange the controls as shown in Figure 7.2.
Figure 7.2. Arrangement of controls on upper portion of frmDataViews

10. Select all the controls and copy them to the bottom portion of frmDataViews. Rename all the controls
(except for the labels) so that they end in 2 instead of 1, as in btnApply2.
11. Set the CaptionText property of DataGrid2 to DataView2.
The final design of the form is shown in Figure 7.3, and the code for this form is shown in Listing 7.6
Figure 7.3. Final design of frmDataViews

Listing 7.6 Code for two grids displaying different views of same data table

Imports System
Imports System.Data
Imports System.Data.SqlClient
Public Class frmDataViews
Inherits System.Windows.Forms.Form
"Windows Form Designer generated code"
Private dsCustomers As New DataSet()
Private dvView2 As DataView
Private Sub frmDataViews_Load(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles MyBase.Load
Dim i As Integer
Dim col As DataColumn
' Initialize DataAdapter
Dim daCustomers As SqlDataAdapter = New _
SqlDataAdapter("select * from tblCustomer", _
"server=localhost;uid=sa;database=novelty")
' Fill only ONE table
daCustomers.Fill (dsCustomers, "Customers")
' create second DataView

dvView2 = New DataView(dsCustomers.Tables("Customers"), _
    "", "LastName", DataViewRowState.CurrentRows)
' Fill list of column names
For Each col In dsCustomers.Tables ("Customers").Columns
cboSort1.Items.Add (col.ColumnName)
cboSort2.Items.Add (col.ColumnName)
Next
' Fill list of DataViewRowState enumeration
Dim names As String()
names = DataViewRowState.None.GetNames (DataViewRowState.None.GetType)
For i = 0 To names.GetUpperBound (0)
cboRowState1.Items.Add (names (i))
cboRowState2.Items.Add (names (i))
Next
' set to default values
txtFilter1.Text = ""
txtFilter2.Text = ""
cboSort1.SelectedItem = "ID"
cboSort2.SelectedItem = "ID"
chkDesc1.Checked = False
chkDesc2.Checked = False
cboRowState1.SelectedItem = "CurrentRows"
cboRowState2.SelectedItem = "CurrentRows"
dsCustomers.Tables ("Customers").DefaultView.Sort = "ID"
dvView2.Sort = "ID"
' Bind grids to table
DataGrid1.DataSource = _
dsCustomers.Tables ("Customers").DefaultView
DataGrid2.DataSource = dvView2
End Sub

Private Sub btnApply1_Click(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles btnApply1.Click
Dim sort As String
Dim rowState As DataViewRowState
' Set Filter
dsCustomers.Tables ("Customers").DefaultView.RowFilter = _
txtFilter1.Text
' Set sort
sort = cboSort1.SelectedItem
If chkDesc1.Checked Then
sort = sort & " DESC"
End If
dsCustomers.Tables ("Customers").DefaultView.Sort = sort
' Set row state

dsCustomers.Tables ("Customers").DefaultView.RowStateFilter = _
rowState.Parse (rowState.GetType, cboRowState1.SelectedItem)
End Sub

Private Sub btnApply2_Click(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles btnApply2.Click
Dim sort As String
Dim rowState As DataViewRowState
' Set Filter
dvView2.RowFilter = txtFilter2.Text
' Set sort
sort = cboSort2.SelectedItem
If chkDesc2.Checked Then
sort = sort & " DESC"
End If
dvView2.Sort = sort
' Set row state
dvView2.RowStateFilter = _
rowState.Parse (rowState.GetType, cboRowState2.SelectedItem)
End Sub
End Class

The frmDataViews_Load routine initializes the various objects on the form. The DataAdapter is created and then
used to load the tblCustomer data into the DataSet's Customers table. For the two DataViews, we
create a new one, dvView2, and use the table's default view as the other. dvView2 is initialized to
expose all current rows, sorted by LastName.
The two sets of comboboxes are then initialized. The cboSort controls are loaded with the list of column
names of the Customers table. The cboRowState controls are filled with the list of enumerated values for the
DataViewRowState enumeration.

Note
Visual Basic.NET no longer supports the ItemData property. That's why we use the enumeration's
GetNames method to convert from enumeration values to strings when loading the comboboxes.
Similarly, the enumeration's Parse method is used to convert from strings to enumeration values
when later assigning the selected values to the RowStateFilter property.

Default settings are then assigned to the criteria controls. Then the initial sort order for both views is set to
the ID field. Finally, each of the two views is bound to one of the DataGrids, which results in both grids
displaying all the current data.
The selected criteria settings are applied to the appropriate view when the corresponding Apply button is
clicked on. The two routines, btnApply1_Click and btnApply2_Click, are identical except that they

manipulate alternate sets of controls. The RowFilter is set from the text in the txtFilter textbox, the Sort
property is set from the column selected in the combobox (with the optional addition of the DESC
descending modifier), and the RowStateFilter is set from the value set in the combobox. Modifying the
properties of the views that are bound to the grids causes the grids automatically to display the data per the
new view specifications.
Now you can run the DataSetCode project. Click on the Data Views button, which displays the new form,
frmDataViews. Make whatever changes you like to the criteria of either grid. Be sure to click on the Apply
button to apply those changes to the associated grid. Figure 7.4 shows a sample display.
Figure 7.4. Sample display of results in the form frmDataViews

Experiment with the comboboxes to try different columns to sort by and different row states to display. Try
different filters, including compound expressions such as "ID > 10 AND ID <= 18" or "LastName Like 'c*'"
(don't type the double quotes in the textbox). For more information on the rules for the filter expression, see
the Visual Studio help topic DataColumn.Expression Property.
In addition, the grids automatically support adding, modifying, and deleting rows, so try editing some rows
and then select an appropriate row state (for example, Added, ModifiedCurrent, or Deleted) to display only
those modified rows.

Note
Be sure to pay attention to how the two views displayed in the grids are actually displaying the
same base table. If you add, modify, or delete a row in one view (and accept the change by
moving to another row), the change automatically appears in the other view (unless, of course,
that row is filtered out). Very cool!

Business Case 7.1: Viewing Data Combined from Different Sources


When Jones Novelties, Incorporated, set out to build its new data processing system, it already had bits and
pieces of it sitting in different forms. For example, CEO Brad Jones had been maintaining order information
in an Access database. Although he realized the benefits of developing a new system with SQL Server as the
database, he wanted to be sure that nothing would get lost during the transition. He felt that it was
important to make changes gradually, especially when it came to safeguarding all the company's historical
order data.
What Jones wants to do is to develop the new system on SQL Server but to keep using the data stored in his
Access database until the end of the transition. Thus some data would be stored in one database and other
data would be stored in a second database. The two databases would need to share and join data even
though they wouldn't be of the same database type!
Fortunately, this requirement doesn't pose a problem for ADO.NET. As we've shown in this chapter, the
DataSet object doesn't care, or even know, where the data in its tables came from. That's true even if it
comes from different sources. Jones's database developer can develop the application today by loading the
tblOrder table from an Access database and the tblCustomer table from a SQL Server database. In the
future, when she is ready to have the tblCustomer table reside in the SQL Server database as well, she just
changes the connection string used by the DataAdapter to load tblCustomer, and everything else continues
to function as before.
We will now build a form that shows how Jones's database developer can achieve that result. We do so as
follows.

1. Launch Visual Studio.NET.
2. Create a new Visual Basic Windows Application project.
3. Name the project BusinessCase71.
4. Specify a path for saving the project files.
5. Enlarge the size of Form1.
6. In the Properties window for Form1, set its Name property to frmShowOrders and its Text property to
Show Orders.
7. Add a textbox named txtCustID, a button named btnFind, a listbox named lstCustomer, and a
DataGrid named grdOrders. Set the button's Text property to Find.
8. Arrange the controls as shown in Figure 7.5.
Figure 7.5. Arrangement of the controls on frmShowOrders

9. From the Toolbox, add a DataSet component as an Untyped DataSet and set its name to
dsCustOrders.
10. For tblCustomer, which resides in the SQL Server Novelty database, add a SqlDataAdapter. When
the Configuration Wizard begins, choose the connection to the Novelty database being used throughout
this chapter. Select Use SQL Statements for the Query Type.
11. Use Select * from tblCustomer as the SQL statement to load data into the DataSet .
12. When finished with the Configuration Wizard, change the name of the SqlDataAdapter to daCustomers.
The Configuration Wizard has also placed a (properly configured) SqlConnection component in the
component tray.
13. For tblOrder, which resides in the Novelty.MDB Access database, add an OleDbDataAdapter. When
the Configuration Wizard begins, add a connection. When the New Connection button is clicked on and
the Data Link tabbed dialog is displayed, click on the Provider tab and select the Microsoft Jet 4.0 OLE
DB Provider.
14. Click on the Connection tab and enter or browse to the Novelty.MDB database file.
15. Select Use SQL Statements for the Query Type.
16. Use Select * from tblOrder as the SQL statement to load data into the DataSet .
17. When finished with the Configuration Wizard, change the name of the OleDbDataAdapter to
daOrders. The Configuration Wizard has also placed a (properly configured) OleDbConnection
component in your component tray.
The first piece of code to be added at the top of the file is

Imports System
Imports System.Data
Imports System.Data.SqlClient
Imports System.Data.OleDb

Then, within the body of the class definition for frmShowOrders, we add the code shown in Listing 7.7.

Listing 7.7 Code to join data from two different types of data sources

Private dvOrders As New DataView()

Private Sub frmShowOrders_Load(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles MyBase.Load
Dim rel As DataRelation
' Fill the DataSet table with data from database
daCustomers.Fill (dsCustOrders, "Customers")
daOrders.Fill (dsCustOrders, "Orders")
' Create relation between the tables
rel = dsCustOrders.Relations.Add ("relCustOrders", _
dsCustOrders.Tables ("Customers").Columns ("ID"), _
dsCustOrders.Tables ("Orders").Columns ("CustomerID"))
' Set Primary Key on Customers table
Dim pk (0) As DataColumn
pk (0) = dsCustOrders.Tables ("Customers").Columns ("ID")
dsCustOrders.Tables ("Customers").PrimaryKey = pk
' Set Default Sort to allow Find method to work
dsCustOrders.Tables ("Customers").DefaultView.Sort = "ID"
End Sub

Private Sub btnFind_Click(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles btnFind.Click
Dim RowNum As Integer
Dim dvRow As DataRowView
Dim i As Integer
If IsNumeric (txtCustID.Text) Then
RowNum = dsCustOrders.Tables ("Customers"). _
DefaultView.Find (txtCustID.Text)
If RowNum <> -1 Then
dvRow = dsCustOrders.Tables ("Customers"). _
DefaultView (RowNum)
' Fill Listbox with Customer data fields
lstCustomer.Items.Clear()
For i = 0 To dsCustOrders.Tables( _
"Customers").Columns.Count - 1
lstCustomer.Items.Add (dvRow.Item(i))
Next
grdOrders.CaptionText = _
"Orders for customer#" & txtCustID.Text
' Get related Child rows of selected Customer
dvOrders = dvRow.CreateChildView ("relCustOrders")
grdOrders.DataSource = dvOrders

Else
MessageBox.Show( _
"CustomerID not found. Please try again.")
txtCustID.Clear()
End If
Else
Beep()
End If
End Sub

We set up everything in the frmShowOrders_Load routine. We then fill the two DataSet tables and create
the DataRelation that joins them. Finally, we set the PrimaryKey and Sort properties on the Customers
table and DefaultView so that we will be able to use the view's Find method, as we demonstrate shortly.
The interesting stuff happens in response to clicking on the Find button, which is implemented in the
btnFind_Click routine. After verifying that the value in the txtCustID textbox is indeed numeric, we search
for this value via the Customers DefaultView. If it is found, each of the columns of that Customer
DataRowView is displayed in the listbox. We then create a new view of the child rows of this selected
DataRowView and bind the view to the DataGrid.
Be sure that you appreciate and are appropriately impressed by what we have just done. We have created
and navigated a relation that joins two tables from two different types of databases!
You can now run the BusinessCase71 project and see for yourself how the customer data is displayed in the
listbox and the orders for that customer are displayed in the grid. Figure 7.6 shows a sample display.
Figure 7.6. Sample display of results in the form frmShowOrders

Note
Don't be alarmed or confused by the fact that the Orders grid in Figure 7.6 has a column named
OrderAmount rather than Amount, as we defined it in our SQL Server table. Remember, although
the customer data is coming from the SQL Server database, the orders data is coming from a
different, "legacy" MDB database. It is not uncommon to see the names of database objects
change from one version of an application to the next.
If you're really bothered by OrderAmount, you can rectify it by using the AS clause to change the
column name in the data returned by the Select statement for the daOrders DataAdapter, as
follows:

SELECT CustomerID, ID, OrderAmount AS Amount, OrderDate


FROM tblOrder

The need for data to be easily combined from several different data sources will become more prevalent as
time goes on. Companies will deal with more trading partners, companies will merge, and data will come in
different formats. For example, XML is becoming an increasingly popular and easy way to transfer data such as
order and product lists between companies. To the DataSet, XML is just another way to fill its tables with
data. In Chapter 10 we show how to use the DataSet's ability to load and save data as XML.
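As a preview, the round trip is only a couple of calls. Here is a sketch using the dsCustOrders DataSet from this Business Case; the file name is illustrative:

```vb
' Save the DataSet's contents, along with its schema, to a file...
dsCustOrders.WriteXml("CustOrders.xml", XmlWriteMode.WriteSchema)

' ...and later fill a fresh DataSet from that same file.
Dim dsFromXml As New DataSet()
dsFromXml.ReadXml("CustOrders.xml")
```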


Strongly Typed DataSets


Until now, we have been discussing DataSets in their untyped form. However, with ADO.NET and Visual
Studio, we can also generate typed DataSets. A typed DataSet is derived from the untyped DataSet class
but adds objects, methods, properties, and events that are specific to our database schema. The schema is
defined in an XML Schema file (.xsd file), and a design-time tool is provided to generate a set of classes
based on this schema.
Because the typed DataSet is derived from the DataSet , it inherits all its functionality, properties,
methods, and events, and it can be used anywhere an untyped DataSet can be used. However, as its
elements are strongly typed, it offers additional features that provide increased development speed and
reliability.

DataSets, DataTables, and DataRows are objects specific to the schema being handled.
DataColumns and DataRelations are exposed as specific named properties, rather than generic
collection elements.
Compile-time-type checking is made possible.
IntelliSense statement completion is provided in the Visual Studio code editor.
Code is more concise and readable overall.
For example, using our Customers table, setting a column value for a row would look like

dsCustomers.Tables("Customers").Rows(row)("FirstName") = NewValue

where row is the index into the Rows collection of the Customers table, FirstName is the name of the column
being accessed, and NewValue is the new value being assigned. Several potential errors will be
reported only at runtime rather than at design time. Is the variable NewValue of the correct type to match the
type of the column being assigned? Does the table Customers exist? Does the column FirstName exist? Was
the table or column name accidentally misspelled? The same questions apply to reading a column value.
However, if we generate a typed dsCustomers DataSet , the DataSet has properties specific to our schema
and can already do all the required type checking at design time. The corresponding code for a typed
DataSet containing the Customers table would be

dsCustomers.Customers(row).FirstName = NewValue

Note how the Customers table is a specific property of the typed DataSet and the FirstName column is a
specific property of the typed DataTable Customers. We also show shortly how the IntelliSense in the code
editor utilizes these properties.

Let's return to the Departments table to see how this approach works. First, we do the following.

1. Open the DataSetCode project in Visual Studio.
2. On the frmDataSets form, add a button beneath the Data Views button.
3. Name the new button btnTypedDataSet and set its Text property to Typed DataSet.
We now need to add the desired schema to our project. If we already had an .xsd file, we could use that. As
we don't have one, we'll create one. We can do so easily by using the components from the Data tab of the
toolbox, as follows.

1. Add a SqlDataAdapter component to the form. For the Configuration Wizard, use the connection to
the Novelty (SQL Server) database, use "select * from tblDepartment" for the selection string, and use
the default settings for the other options. Doing so adds a new DataAdapter and a new Connection
to the form. Change the name of the DataAdapter to daDepartments.
2. Display the Generate DataSet dialog box, by either selecting it from the main Data menu or by selecting
it from the pop-up menu displayed by right-clicking on the form. Choose a New dataset, name it
DepartmentsDS, leave the Add this dataset to the designer checkbox checked, and click on the OK
button.
3. Change the name of the DataSet component just added to dsDepartments.
The Dataset Generator also added the XML schema file, DepartmentsDS.xsd, to the project in the Solution
Explorer. In addition, the file that will implement the custom typed DataSet classes, DepartmentsDS.vb,
was added to the project beneath the schema file. The DepartmentsDS.vb file isn't visible unless we click on
the Show All Files button on the top of the Solution Explorer.

Note
The DataSet Generator and Schema Editor handle schemas with related tables as easily as they
handle single tables.

1. Double-click on the DepartmentsDS.xsd file in the Solution Explorer to display the XML Schema editor.
We are going to make a simple name change to the element in the displayed schema.
2. Change the element name from tblDepartment to Departments, by editing the uppermost left cell in
the element. The results are shown in Figure 7.7.
Figure 7.7. The DepartmentsDS schema in the Schema Editor

3. Right-click on the Schema Editor design surface to verify that the Generate DataSet option is selected.
This is the default setting, but if for some reason it isn't selected, select it now.
4. Save and close the Schema Editor. The code in the DepartmentsDS.vb file is automatically generated.

Note
If you're curious, go ahead and open the DepartmentsDS.vb file in the code editor to see what the
generated code looks like. The important thing to remember is that, for a given table name
(which we changed from tblDepartment to Departments), the DataSet Generator generates three
object classes,

DepartmentsDataTable
DepartmentsDataRow
DepartmentsRowChangeEvent

in addition to the typed DataSet class itself, which we specified as DepartmentsDS.

We can now go ahead and add the code to use the typed DepartmentsDS DataSet, as shown in Listing
7.8.
Listing 7.8 Code to display the contents of the typed DataSet DepartmentsDS

Private Sub btnTypedDataSet_Click(ByVal sender As System.Object, _


ByVal e As System.EventArgs) Handles btnTypedDataSet.Click
daDepartments.Fill(dsDepartments, "Departments")
DisplayDepartments(dsDepartments)
End Sub

Private Sub DisplayDepartments(ByVal ds As DepartmentsDS)


Me.lstOutput.Items.Clear()
Me.lstOutput.Items.Add("DISPLAY TYPED DATASET")
Me.lstOutput.Items.Add("=====================")

'Each Column is now a property of the DepartmentsDS DataSet


Dim row As DepartmentsDS.DepartmentsRow
For Each row In ds.Departments.Rows
Me.lstOutput.Items.Add( _
ds.Departments.IDColumn.ColumnName _
& " : " & row.ID.ToString)
Me.lstOutput.Items.Add( _
ds.Departments.DepartmentNameColumn.ColumnName _
& " : " & row.DepartmentName)
Next
End Sub

In the Click handler for btnTypedDataSet, we call the Fill method of the daDepartments
DataAdapter to load the Department table with data. We then call the DisplayDepartments routine to
display the contents of the table. To display the table contents, we simply loop across all the rows in the table
and display the column name and value for each column. This approach isn't very different from what we've
done before, but note the following.

The variable row is declared as a specific type of row, DepartmentsDS.DepartmentsRow, rather than a
generic DataRow.
The Departments table is accessed as a property of the DataSet, ds.Departments, rather than as an
item in the Tables collection, that is, ds.Tables("Departments").
The columns of the table are accessed as properties of the table, ds.Departments.IDColumn and
ds.Departments.DepartmentNameColumn, rather than as items in the Columns collection, that is,
ds.Tables("Departments").Columns("ID") and ds.Tables("Departments").Columns("DepartmentName").
The column values are accessed as properties of the row (DepartmentsRow), row.ID and
row.DepartmentName, rather than as items in the Row's Items collection, that is, row("ID") and
row("DepartmentName").
Figure 7.8 shows how typed properties (for example, DepartmentName) appear in the IntelliSense pop-up
menu, which aids and accelerates the coding process.

Figure 7.8. IntelliSense menu, showing properties of a typed DataSet


Summary
In this chapter we discussed several important real-world topics for developing ADO.NET database
applications. Concurrency detection is an important multiuser design consideration, especially when you're
working in a disconnected mode with the DataSet . Table and column mappings are a convenience that adds
to the readability and maintainability of application code. Using DataView objects offers multiple views of
the same data, each with its own filter and sort settings. Finally, the strongly typed DataSet provides the
option of advanced functionality and readability, when used in appropriate situations.

Questions and Answers

Q1:

Now that I understand how to detect a concurrency violation, how do I resolve it?

A1:

The answer to this question is application specific. You may decide that, if a concurrency conflict
is detected, the user must requery the database and start over. You may decide to let the
second user's changes override the first user's changes but write to a log (or send a message) of
the occurrence. A third option is to let the user decide, by presenting her with all three sets of
data: the original values, the values currently in the database, and the values that she is trying
to set in the database. Finally, you could develop an algorithm or some logical code to let the
application decide on its own which change to keep. For example, perhaps the priority or role of
the user, such as supervisor versus clerk, should be the determining factor. ADO.NET and Visual
Studio.NET provide the tools for detecting the concurrency violation; you still need to make the
hard decisions as to how to resolve it!

Q2:

Should I always use typed DataSets? If not, what do I need to consider when
deciding?

A2:

As is usually the case, the answer is "It depends." We've shown the advantages of the typed
DataSet , but there are still times when using an untyped DataSet is preferable. That's mainly
the case when the schemas being loaded into the DataSet are unknown ahead of time or are
subject to change. A classic example is queries that are being generated on the fly or are being
specified by the user in some type of query builder. Or maybe you want the schema to be
specified by the XML file that you're reading. In these cases, you can't provide a schema to
generate the typed DataSet during design time.
Another consideration or trade-off is whether writing generic procedures is a key requirement of
your development project. The benefits of typed DataSets come at the price of nongeneric
code. One consequence of using a nongeneric typed DataSet is that any change to the database
schema requires a regeneration and recompilation of the typed DataSet objects.
Finally, the standard, untyped DataSet object is so useful and easy to use, you may often want
to use it to manage data that isn't schema based. An untyped DataSet could be a great
alternative to implementing custom data structures with custom search, sort, and filter
algorithms. The XML capabilities of the DataSet are an additional free bonus.


Chapter 8. Visual Studio.NET Database Projects


IN THIS CHAPTER

Creating a Database Project


Database References
Scripts
Queries
The database project is a special type of Visual Studio.NET project. Its purpose is to create and manage SQL
database scripts. If you're developing database applications with Visual Studio.NET, you will want to know
about the tools available for making your work with databases easier and faster. This version of Visual Studio
features many new tools and contains significant improvements to others that existed in previous versions.
The Visual Database tools allow us to view, design, modify, and test database objects (for example, tables,
views, queries, stored procedures, and so on) quickly without having to jump from the Visual Studio
environment to a different toolset. The main advantage is in design and development productivity, although
licensing and installation issues also have been greatly simplified.
We have already worked with some of these tools in previous chapters, and their use is usually quite
intuitive. They should also seem very familiar to you if you've had experience using similar tools (even MS
Access qualifies). If this is your first time working with such tools, the Visual Studio help topics will be useful.
In this chapter we continue using some of the Visual Database tools to demonstrate how to utilize the tools
and features available in a VS.NET database project.


Creating a Database Project


A special type of VS.NET project of particular interest is the database project. A database project is not
specific to any .NET programming language, such as Visual Basic or C#, but is meant to be used to design,
test, run, and manage SQL scripts and queries. When you design your application in multiple layers, as you
should, this project type helps you to develop and manage the database layer closest to the database and
the database itself.
In some applications, a database project may be part of the actual application code, and in other
applications, it may be part of a separate administration solution used to set up and maintain the
application's database.
To add a new database project to a solution, do the following.

1. Launch the Add New Project dialog from either the main File menu or from the context menu displayed
by right-clicking on the solution in the Solution Explorer.
2. The left panel of this dialog box displays a list of folders containing different project types. Expand the
Other Projects folder (click on the "+").
3. Select the Database Projects folder. It displays a Database Project template in the right panel of the
dialog box.
4. Specify a project name of NoveltyData and a path for saving the project files and then click on the OK
button.
5. If you don't currently have any database connections defined in the Server Explorer, the Data Link
Properties dialog box will be displayed, allowing you to define a new database connection.
6. If you have at least one database connection defined in the Server Explorer, the Add Database
Reference dialog box is displayed. From this dialog choose the database connection that you want to
use (you can add new ones later). Alternatively, you can click on the Add New Reference button to
display the Data Link Properties dialog box, allowing you to define a new database connection.
7. Select the connection to the Novelty database on the SQL Server and click on the OK button. If for
some reason the connection doesn't exist, create it from the Data Link Properties dialog box.
8. Figure 8.1 shows the project and its folders in the Solution Explorer.
Figure 8.1. The NoveltyData database project shown in the Solution Explorer

Note that the created database project contains the following folders:

Change Scripts
Create Scripts
Queries
Database References
Let's take a look at these folders and what they contain. We start with Database References because they're
the prerequisite for everything else.


Database References
A database reference is a pointer to a database. However, this reference doesn't allow you to access and
view the database objects; that's the job of the database connection.
A database reference is stored as part of the database project that is saved to disk. Whenever the project is
opened, the project scans the list of database connections currently defined in the Server Explorer to see if a
connection to the referenced database already exists. If such a connection doesn't exist, the database
project automatically creates one.
The first database reference added to a project becomes the default reference for the project. That is, unless
specified otherwise, the scripts and queries that will be designed and/or run will be against that database
reference.

Note
The icon for a database reference is a shortcut icon on top of a database icon. For the default
database reference, the shortcut icon is red, green, and white. For the other references, the
shortcut icon is black and white.
You can change the default reference for the project by right-clicking on the reference that you
would like to be the default and selecting Set as Project Default from the context menu displayed.

You can add database references by right-clicking on the Database References node and selecting New
Database Reference from the context menu displayed.

Tip
You can add another database reference and make it the default all at once. Right-click on the
database project name (main node) in the Solution Explorer and then select the Set Default
Reference menu item from the displayed context menu. Doing so displays the Set Default
Reference dialog box. If you choose a database connection from the list of connections, it is added
to the project and set to be the default database reference.


Scripts
The database project template automatically creates two folders for holding SQL scripts. The Create Scripts
folder is meant to hold scripts that are used to re-create the database from scratch, or to re-create a portion
of the database that has undergone changes.
The Change Scripts folder is meant to contain SQL scripts that reflect desired changes that you haven't yet
made to the database. Changes may be "placed on hold" because, as a developer, you don't have sufficient
security privileges to modify a production database (sometimes not a bad idea!) or because changes from
multiple sources are to be combined and applied all at once as a unit.

Note
The folder names given are by convention only. You can change them to something different that
you prefer or find more meaningful. Of course, being able to change them also means that you
have the flexibility to mess things up, or at least confuse others (or even yourself) who need to
use your project.
You can also add other folders to a database project. For example, you may want a separate
folder to store scripts required for upgrading the database to a specific release version or for other
maintenance activities. You may want to have several different query folders, each containing
different types of queries. You can add a new folder to the project by right-clicking on the project
and selecting the New Folder menu item from the context menu that is displayed.

You can create any SQL script manually, by right-clicking on a folder (or the project) node in the Solution
Explorer and then selecting the Add New Item or the Add SQL Script menu item from the displayed context
menu. After selecting one of the standard script templates, shown in Figure 8.2, you can edit it manually in
the Visual Studio editor.
Figure 8.2. The standard script templates displayed in the Add New Item dialog box

What is special about the Create and Change scripts is that you can have them generated automatically.

Create Scripts
As we said previously, Create Scripts are SQL scripts that create new database objects, including tables,
views, stored procedures, and constraints. They are normally used to set up an installation or revert an
existing site (such as a development server) to its pristine state.

Note
You can generate Create Scripts only if you are using SQL Server 7.0 or SQL Server 2000.
Moreover, to use this feature, the client tools for SQL Server must be installed on the same
machine as Visual Studio. The reason is that Visual Studio utilizes the same tools and dialogs.

To generate Create Scripts, do the following.

1. Open the Server Explorer and right-click on the item for which you want to generate the script. This
item can be either the entire database or an individual database object (table, view, stored procedure,
or function). For this example, select the entire Novelty database.
2. Select the Generate Create Script menu item from the context menu displayed. After you successfully
enter your security credentials in the SQL Server Login dialog box, the Generate Create Scripts dialog
box is displayed.
3. If you select a specific database object, the Generate Create Scripts dialog box appears, configured for
only that object. If you select the entire database, as in Figure 8.3, the dialog box appears, showing all
the objects available on the database but without any of them selected for scripting. This result is
shown in Figure 8.3.
Figure 8.3. The Generate Create Scripts dialog box for the entire Novelty database

Tip
You can also display the Generate Create Scripts dialog box by dragging and dropping from the
Server Explorer to a folder in the Solution Explorer. Here, too, you can drag either the entire
database or individual database objects. If you select one or more individual objects, such as a
table and its relevant stored procedures, the Generate Create Scripts dialog appears configured
for just the selected object(s). However, if you select the entire database, or even a single
folder within the database, such as the Tables or Views folder, it in fact behaves as if you
dragged the entire database. If you want to script more than just a single object, you can also
start by right-clicking on the Create Scripts folder and selecting the Generate Create Script
menu item from there.
4. On the General tab, you can specify which objects are to be included in the generated script by
selecting one or more objects from the Objects on the Novelty panel and adding them to the Objects to
be scripted panel by either double-clicking on an object or clicking on the Add button. Entire groups of objects,
such as all tables or all views (or even all the database objects), can be specified by checking one or
more of the checkboxes in the top portion of the tab. For this example, check the Script all objects
checkbox.
5. Select the Formatting tab. In addition to the default settings, check the Include descriptive headers in
the script files checkbox.
6. Click on the OK button. Doing so displays the Browse for Folder dialog box so that you can specify
where to save the script file(s). Note that it defaults to the Create Scripts directory of the current
project, but you can change this setting to whatever you want. Accept the default by clicking on the OK
button.
7. Because we had you choose to script all the database objects, many script files are created and added
to the project, as shown in Figure 8.4. We could have had you choose to have all the scripts in a single
file by selecting the Create one file option on the Options tab of the Generate Create Scripts dialog
box.
Figure 8.4. The Solution Explorer filled with Create Scripts for the Novelty database

Tip
You can even generate the script for creating the database itself by checking the Script
database option on the Options tab of the Generate Create Scripts dialog box.

Our database project, NoveltyData, now contains a set of scripts that we can run to create all the database
objects for the Novelty database. In the Running the Scripts section, we show how to do so.
8. You can view (and modify) the contents of a script file by double-clicking on it in the Solution Explorer.

Figure 8.5 shows the script dbo.tblOrder.tab that creates the tblOrder table in the Novelty database.
Figure 8.5. The generated script to create the tblOrder table.

Note, however, that these scripts create only the database schema and do not populate the newly created
tables with any data. In the Command Files section, we show how to copy a table's data.
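A generated Create Script is ordinary T-SQL DDL that drops the object if it already exists and then re-creates it. As a rough sketch (the column definitions here are illustrative assumptions, not the exact generated output), dbo.tblOrder.tab looks something like this:

```sql
-- Drop the table if it already exists, then re-create it from scratch
IF EXISTS (SELECT * FROM sysobjects
           WHERE id = object_id(N'[dbo].[tblOrder]')
           AND OBJECTPROPERTY(id, N'IsUserTable') = 1)
DROP TABLE [dbo].[tblOrder]
GO

CREATE TABLE [dbo].[tblOrder] (
    [ID]         int IDENTITY (1, 1) NOT NULL,  -- assumed columns, for illustration
    [CustomerID] int NULL,
    [OrderDate]  datetime NULL
) ON [PRIMARY]
GO
```

The separate .kci, .fky, and .ext scripts generated alongside it add the keys and indexes, the foreign-key constraints, and the extended properties, respectively, which is why they must be run after the .tab scripts.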

Change Scripts
Change scripts are used to apply changes to an existing database schema. Although these scripts could be
written manually, it is preferable to use a tool to generate them. When we use the Visual Database tools that
we used in Chapters 1 and 2, Visual Studio automatically maintains a script of any changes made to the
database schema that haven't yet been applied to the database.
Let's say that you want to add a new field, StartDate, to the tblEmployee table to track the start date of each
employee. Do the following.

1. Open the Server Explorer, expand the Tables node of the Novelty database, and right-click on
tblEmployee.
2. Select the Design Table menu item from the context menu to open tblEmployee in the Table Designer.
3. Add a new column named StartDate, of type datetime, as shown in Figure 8.6.
Figure 8.6. Adding the StartDate column to the tblEmployee table

Because you will want to apply this change to all the databases already installed and deployed at different
sites in the field, you need to create a change script for what you just did.

1. Select the Generate Change Script menu item from the main Diagram menu or click on the Generate
Change Script button on the Table toolbar. Doing so displays the Save Change Script dialog box, with a
preview of the script, as shown in Figure 8.7.
Figure 8.7. The Save Change Script dialog box, showing the script to add the StartDate
column to tblEmployee

2. Click on the Yes button. The standard Save As dialog is displayed and defaults to the Change Scripts
folder of the current database project.
3. Click on the Save button to save the script with the default name of tblEmployee.
4. Close the table designer where you modified tblEmployee. When prompted to save changes to the
table, click on the No button. In the next section we show you how to apply the changes by running the
SQL script that you just created.
5. Double-click on tblEmployee.sql in the Solution Explorer to view the saved script, as shown in Figure
8.8.
Figure 8.8. Viewing the tblEmployee.sql script.
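The generated change script is plain T-SQL wrapped in a transaction. Stripped of the SET options and error handling that the tool adds, the heart of tblEmployee.sql is an ALTER TABLE statement along these lines (a sketch, not the verbatim generated script):

```sql
BEGIN TRANSACTION
GO
-- Add the new StartDate column to the existing table
ALTER TABLE dbo.tblEmployee ADD
    StartDate datetime NULL
GO
COMMIT
GO
```

Because the table may already contain rows and no default value was specified, the new column must allow NULLs.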

Running the Scripts


You can run a script directly within the Solution Explorer. The easiest way to do so is to drag the script that
you want to run and drop it on the database reference that you want to run it against. Alternatively, you can
right-click on the script that you want to run. The context menu displayed has both a Run and a Run On
menu items. Selecting the Run item executes the script against the default database reference. Selecting the
Run On menu item, as shown in Figure 8.9, allows you to specify a database other than the current default.
Note that you can choose from existing database references or use this opportunity to add a new one.
Figure 8.9. The Run On dialog box

Note
You can also choose to define a temporary database reference to run this script on. Double-clicking on the last item in the list, <temporary reference>, displays the familiar Data Link
Properties dialog for you to use to define the connection. However, this reference won't be added
to the project or to the Server Explorer.

To apply the changes that you previously designed and saved in the script tblEmployee.sql, do the following.

1. Verify that the tblEmployee table does not contain the StartDate field. In the Server Explorer, expand
the node for tblEmployee to list the fields of the table, as shown in Figure 8.10.
Figure 8.10. Displaying the fields of tblEmployee

2. Expand the Change Scripts folder in the Solution Explorer and select the tblEmployee.sql script.
3. Drag and drop the script onto the reference for the Novelty database in the Solution Explorer.
4. The Execute Scripts or Queries dialog box is shown, giving you a chance to confirm that you want to
run the script on the specified database. Click on the Yes button to continue and to run the script.
5. Repeat step 1 to again display the fields of the table (or just click on the Refresh button on Server
Explorer toolbar if it remained open). The newly added StartDate field now appears.

Command Files
Now that you have created all these scripts to create and modify the different database objects, wouldn't it
be nice if you could organize multiple scripts into a logical group to be run as a single unit? Yes, it would be,
and VS.NET can create command files to do just that. These command files, which have the .cmd extension,
are meant to be used on the Windows 2000 or Windows XP operating systems, which recognize and can
execute such files. These files can also load a newly created table with data that we exported from an
existing database.

Note

The ability to easily and automatically create a script that loads table data in addition to creating
database schema and objects is a VS.NET feature not found in the SQL Server Enterprise
Manager.

Let's say that we want to create a single command file that will automatically run all the Create Scripts that
we need to create a brand new version of our Novelty database on another computer. Although this new
system will have its own customers, employees, and orders, the inventory information in tblInventory will be
the same. We therefore want to populate the new database's tblInventory table with the data currently in
our existing tblInventory table.
Because you will want to have the command file load the inventory data from the existing database to the
newly created one, you must first export the data and then continue with the process of creating the
command file, as follows.

1. In the Server Explorer, right-click on tblInventory and select the Export Data menu item from the
context menu displayed.
2. The Browse for Folder dialog box appears and defaults to the Create Scripts folder of the database
project. Click on the OK button to accept this folder.
3. After proceeding through the SQL Server Login dialog, the script dbo.tblInventory.dat is created.
4. Decide which folder in the database project to use to store the new command file. Again use the Create
Scripts folder.
5. In the Solution Explorer, right-click on the Create Scripts folder and select the Create Command File
menu item from the pop-up menu. Doing so displays the Create Command File dialog box shown in
Figure 8.11.
Figure 8.11. The Create Command File dialog box for the Novelty database.

6. The Available Scripts list of the Create Command File dialog box contains all the SQL scripts in the
selected folder that can be included in the command file. You can add all the scripts, or just individual
ones, to the list of Scripts to be added to the command file. Click on the Add All button to add all the
Create Scripts to the command file.
7. Because at least one Create Table Script (with the .tab extension) was included in the list of scripts to
be added to the command file, and there is at least one exported data file in the folder, the Add Data
button is enabled on the Create Command File dialog box.
8. Click on the Add Data button to display the Add Data dialog box shown in Figure 8.12. This dialog lists
all the Create Table Scripts that were selected to be added to the command file and allows choosing
the corresponding data file for each script.
Figure 8.12. The Add Data dialog box

9. The dialog recognizes and automatically matches the tblInventory data file with the script that creates
the tblInventory table. Click on OK to return to the Create Command File dialog box.
10. Now that the scripts and the exported data files have been specified, click on the OK button to
generate the command file. The Create Scripts.cmd command file is added to the Create Scripts folder
and its contents are displayed, as shown in Listing 8.1.
Listing 8.1 The contents of the Create Scripts.cmd command file

@echo off
REM: Command File Created by Microsoft Visual Database Tools
REM: Date Generated: 08-Feb-02
REM: Authentication type: Windows NT
REM: Usage: CommandFilename [Server] [Database]
if '%1' == '' goto usage
if '%2' == '' goto usage
if '%1' == '/?' goto usage
if '%1' == '-?' goto usage
if '%1' == '?' goto usage
if '%1' == '/help' goto usage

osql -S %1 -d %2 -E -b -i "dbo.tblCustomer.tab"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblDepartment.tab"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblEmployee.tab"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblInventory.tab"
if %ERRORLEVEL% NEQ 0 goto errors
bcp "%2.dbo.tblInventory" in "dbo.tblInventory.dat" -S %1 -T -k -n -q
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblOrder.tab"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblOrderItem.tab"
if %ERRORLEVEL% NEQ 0 goto errors

osql -S %1 -d %2 -E -b -i "dbo.tblCustomer.kci"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblDepartment.kci"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblEmployee.kci"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblInventory.kci"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblOrder.kci"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblOrderItem.kci"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblCustomer.fky"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblDepartment.fky"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblEmployee.fky"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblInventory.fky"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblOrder.fky"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblOrderItem.fky"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblCustomer.ext"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblDepartment.ext"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblEmployee.ext"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblInventory.ext"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblOrder.ext"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.tblOrderItem.ext"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.Employee_view.viw"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.EmployeeDepartment_view.viw"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.DeleteEmployee.prc"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.GetCustomerFromID.prc"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.InsertEmployee.prc"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.InsertEmployeeOrg.prc"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.LastNameLookup.prc"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.procEmployeesSorted.prc"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.SelectEmployees.prc"
if %ERRORLEVEL% NEQ 0 goto errors
osql -S %1 -d %2 -E -b -i "dbo.UpdateEmployee.prc"
if %ERRORLEVEL% NEQ 0 goto errors

goto finish
REM: How to use screen
:usage
echo.
echo Usage: MyScript Server Database
echo Server: the name of the target SQL Server
echo Database: the name of the target database
echo.
echo Example: MyScript.cmd MainServer MainDatabase
echo.
echo.
goto done
REM: error handler
:errors
echo.
echo WARNING! Error(s) were detected!
echo
echo Please evaluate the situation and, if needed,
echo restart this command file. You may need to
echo supply command parameters when executing
echo this command file.
echo.
pause
goto done
REM: finished execution
:finish
echo.
echo Script execution is complete!
:done
@echo on

Note
The command file makes use of the osql and bcp command line utilities that are part of the SQL
Server installation. The osql utility allows you to execute SQL statements, system procedures, and
script files. bcp is a bulk copy program that copies data between a data file and an
instance of SQL Server.

You can run this command file from within the Solution Explorer by right-clicking on it and then selecting the

Run menu item. You can also invoke it externally, independent of Visual Studio, so long as all the scripts
exist together with the command file.

Tip
Remember that running this command file against a database will delete all the data that currently
exists in that database!


Queries
Similar to what we mentioned regarding the Create and Change scripts, you can also create SQL queries
manually within the Visual Studio environment. However, except for the most trivial queries, using the Query
Designer to design the queries graphically is much more efficient and less error-prone.
We can demonstrate use of the Query Designer by creating a parameter update query that updates the
wholesale prices of all of the products in our inventory by a specified percentage.

1. Open the Solution Explorer and right-click on any folder node other than Database References. From
the context menu displayed, select the Add Query menu item. Doing so displays the Add New Item
dialog box, shown previously in Figure 8.2.
2. Select the Database Query template, set the name to UpdateWholesale.dtq, and click on the Open
button. The Query Designer is now displayed, as shown in Figure 8.13.
Figure 8.13. The Query Designer opened to create a new database query

3. From the Add Table dialog, add the tblInventory table and then click on the Close button to dismiss the
dialog.
4. We need to change the query type from a Select to an Update query. We do so by selecting the
Change Type menu item from the main Query menu, or by clicking on the Change Type button on the
Query toolbar, and then selecting the Update menu item.
5. In the Diagram pane of the Query Designer, click on the checkbox for the WholesalePrice field, as that
is the field we're going to update.
6. In the Grid pane, enter the following formula in the New Value column in the row for the
WholesalePrice field that we just added:

WholesalePrice * (1 + ? / 100)

7. This formula accepts a parameter value, which is the percentage that the wholesale price should be
increased by. When the query is executed, the question mark in the formula will be replaced by the
parameter value that is provided. The Query Designer should look like that shown in Figure 8.14.
Figure 8.14. The query to update the WholesalePrice field in the tblInventory table

8. Although we could now run and test our query within the Query Designer, we will (soon) do it from the
Solution Explorer.
9. Close the Query Designer and click on the Yes button when prompted to save the changes to the
UpdateWholesale.dtq query.
10. Double-click on tblInventory in the Server Explorer to display all the current data in that table. These
are the values before we run the query script to update the wholesale price.

Tip
You may want to take a snapshot of the data so that you can easily verify the new prices after
running the update query. You can do so by selecting all the rows that are being displayed and
then copying and pasting them in a Notepad file (or any other tool of your liking).
11. Similar to what we showed previously with a script, we can run a query by dragging and dropping it

onto a database reference. We can also run it on the default reference by right-clicking on the query
that we want to run and selecting the Open menu item.

Note

The context menu displayed when you right-click on a query also contains Design and Design
On menu items that open the Query Designer against the default or a specified database
reference.

12. Drag and drop the UpdateWholesale query onto the reference for the Novelty database in the Solution
Explorer.
13. The Execute Scripts or Queries dialog box is shown, giving us a chance to confirm that we want to run
the script on the specified database. Click on the Yes button to continue and to run the query.
14. The Define Query Parameters dialog box is shown because our query contains a parameter. Enter a
value, say 10, as the percentage increase for the wholesale prices. Click on the OK button to continue.
15. A message box is displayed, stating the number of rows affected by the query. Click on OK to dismiss it.
16. Repeat step 10 to again display the data in the tblInventory table. The table should contain the modified
wholesale prices, as shown in Figure 8.15.
Figure 8.15. The data in the tblInventory table, after increasing the WholesalePrice field by
10 percent
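The query saved in UpdateWholesale.dtq boils down to a short parameterized UPDATE statement; a sketch of the T-SQL, with ? standing for the percentage supplied at run time:

```sql
-- Raise every wholesale price by the percentage bound to the ? parameter
UPDATE tblInventory
SET WholesalePrice = WholesalePrice * (1 + ? / 100)
```

With a parameter value of 10, each wholesale price is multiplied by 1.1 (assuming the parameter is bound as a numeric rather than an integer type, so the division isn't truncated).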


Summary
In this chapter we focused on the Visual Studio.NET database project type. This project type doesn't include
any Visual Basic code; rather, it is meant to be used to create, test, run, and manage SQL database scripts,
queries, and command files. These scripts and commands allow you to create new database schemas, make
changes to existing schemas, and query and update existing database data. These are important, time-saving tools that should be used as much as possible during both the development and deployment phases
of a project.
In general, you probably won't bother getting involved with database project scripts and queries unless
you're developing, maintaining, and enhancing a real-world database application. Furthermore, as an
application programmer, you may be used to leaving the tasks discussed in this chapter to database administrators
(DBAs). However, more and more application programmers are assuming many of the tasks traditionally
performed by DBAs. Even if that isn't the case in your situation, you may still need to perform many of these
tasks for your own private development environment.

Questions and Answers

Q1:

I see that many of the same or very similar tools exist in both Visual Studio and the
SQL Server Enterprise Manager. Which ones should I use?

A1:

The short answer: whichever one(s) you prefer. For many operations, the tools are the same in
either toolset, so you can go with whichever you prefer. However, some operations are easier, or
can only be done, with one tool or the other. In all likelihood, if you are a DBA, you will do most
of your work from within the SQL Server Enterprise Manager. If you are a programmer, you will
most likely want to do as much as possible within the Visual Studio environment. However, if
your database is not SQL Server, you will need other tools in order to design or modify your
database objects. The Visual Studio tools allow you to browse and query such databases but not
to modify them.


Chapter 9. XML and .NET


IN THIS CHAPTER

An Overview of XML
XML Classes in .NET
Extending SQL Server with SQLXML 3.0 and IIS
Using XML, XSLT, and SQLXML to Create a Report
Sometime in the recent, or not so recent, past you almost certainly have encountered some example or use
of the eXtensible Markup Language (XML). In fact, installing either VS.NET or the Common Language Runtime
(CLR) exposes you to XML whether you know it or not. At one point, XML was touted as a "silver bullet" that
would take care of all data-related issues regardless of platform or device. All this excitement created some
unnecessary overhead for applications using XML in the beginning. Developers started wrapping everything
they could in XML tags because it was "cool," regardless of the actual business case for or technical
reasoning behind the use of XML. The power of XML is its use of metadata and structured elements to
contain data. XML is not a programming language, as it contains no directives for application functionality.
That makes it platform-independent.
Chances are that, if you have a firm understanding of HTML, figuring out how to use XML reliably and
effectively isn't that much of a reach. For example, you can think of Namespaces in much the same way you
would name an ActiveX.dll in Visual Basic. A Namespace, represented in an element with the prefix xmlns:,
supplies a unique name for a container that provides functionality and/or data, much the way a class name
does in Visual Basic. In VB.NET, the approach has been simplified through use of an extensive set of classes
to parse and manipulate XML. Consider any web.config file, for example. The file is XML-based versus the
"legacy" format of an .INI file, and its data is accessible by similar simple methods. The main difference is
that the web.config file is more extensible. Evidence of this difference is that the web.config file can be
accessed and manipulated exactly the same way as any other XML document, although the results may be
quite different.
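As a quick sketch of that idea, the same XML classes covered later in this chapter can read a web.config file directly. This is a hypothetical example: the key name "greeting" and the file location are assumptions for demonstration, not part of any standard configuration.

```vbnet
' Hypothetical sketch: treat web.config as an ordinary XML document.
' Assumes a web.config in the current directory containing
' <appSettings><add key="greeting" value="Hello" /></appSettings>.
Imports System.Xml

Module ConfigAsXml
    Sub Main()
        Dim doc As New XmlDocument()
        doc.Load("web.config")
        ' Select the <add> element whose key attribute is "greeting".
        Dim node As XmlNode = _
            doc.SelectSingleNode("//appSettings/add[@key='greeting']")
        If Not node Is Nothing Then
            Console.WriteLine(node.Attributes("value").Value)
        End If
    End Sub
End Module
```

The point is simply that no special configuration API is required; any XML-aware code can read the file.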
The case for using XML, or any of its relatives (Simple Object Access Protocol (SOAP), for example), is
neutrality. There will always be a case for creating an application that has logic that can be extended through
the sharing of its data, rather than requiring that each application have unique coding to access centralized
data.
In this chapter we concentrate on the use of XML for its true purpose of allowing data to "identify" itself to
any application or human being. Throughout the chapter, we describe how XML is used in the real world and
how the .NET platform uses it effectively. In Chapter 10 we present a more in-depth explanation of how to
interact with databases and DataSets using the ADO.NET classes.
Many books and resources devoted to XML are currently available. Therefore, instead of providing a
comprehensive treatment of the topic, we focus on how XML is integrated with the .NET platform. If you're
familiar with XML, you may want to skip ahead to the XML Classes in .NET section; otherwise, enjoy the
refresher.


An Overview of XML
XML is all about data. Specifically, it is about creating an unambiguous set of data that can contain
information that describes the data; this is metadata. For example, consider this simple HTML:

<form name="frmMain" action="mypage.aspx" method="POST">


</form>

It reveals the definition of a FORM element. This element has attributes, of which name, action, and method
are but a few. Attributes describe the element's form, tell any Web browser what to do with that form, and
are the simplest example of metadata. Note the closing tag, which completes the container for the elements
within the form. FORM is a generic container with generic contents; we can create a specific instance of
FORM by placing specific data within its structure.
The following code shows how XML uses elements and attributes to describe a specific piece of data. In it we
create a fictitious person and show that person's information as an XML node. The node or element is called
Person, and the attributes describe the person.

<Person firstName="John"
lastName = "Doe"
address1 = "123 Main Street"
address2 = ""
city = "Sometown"
state = "OH"
zip = "22222"
phone = "111-242-5512"
/>

Note how all the information regarding the person is contained in the Person element. Any application that
can parse XML (and almost any can) could look at this information and learn the person's name, address,
and telephone number. Also, as no other data is associated with this person, the closing tag syntax used is
correct. In XML, everything regarding syntax is strict: forget a closing tag or leave out an element and the
entire document likely will fail.

Note
The semantics of XML are worth noting at this point. An element can be referred to as a node, and
vice versa. Also, in some documentation, an element may be referred to as a tag. To avoid
confusion we refer to them only as elements and nodes throughout this book.

Elements not only can have attributes, but they also can have subelements. Subelements may have their
own attributes, as well, as demonstrated in Listing 9.1.
Listing 9.1 Fictitious Person element with subelements

<Person firstName="John" lastName = "Doe" address1 = "123 Main Street"
    address2 = ""
    city = "Sometown"
    state = "OH"
    zip = "22222"
    phone = "111-242-5512" >
    <orders>
        <order id="111"
            itemid="2932"
            itemdesc="Super Foo Widget" >
        </order>
    </orders>
</Person>

Several things happened here. First, the Person element had to be closed because the scope of the container
changed; it now holds orders. So far, the code in Listing 9.1 is neutral. It contains no application-specific
information, only data and metadata.

Note
Many excellent references on XML are available in print and online. A great place to start is
http://www.w3.org/XML/1999/XML-in-10-points. In addition, try the following:

The World Wide Web Consortium (http://www.w3.org): Most of the specifications related to XML are
made official by the consortium and are well documented.
Microsoft's Web site for XML (http://msdn.microsoft.com): These constantly updated resources
extensively cover uses of XML and related technologies in association with Microsoft products.
TopXML/VBXML.com (http://www.vbxml.com): This site also provides many resources and code
samples for working with XML.

The XML Family of Technologies


XML doesn't stand alone; it has plenty of friends to help make it more usable.

XML Path Language (XPATH) provides a way to query information from an XML document. Although the
syntax is radically different, the concept is similar to that of an SQL query.

Extensible StyleSheet Language Transformations (XSLT) provides a language for transforming (that is,
adding, deleting, or otherwise modifying) the data contained within one XML document into data that
can be used in another XML document. XSLT can use XPATH to obtain the data that it is to transform.
Extensible StyleSheet Language (XSL) is actually XSLT plus objects that allow the developer to
describe how the information is displayed within a browser or other application that is XSL compliant.
Document Object Model (DOM) contains a standard set of functions that allow programmatic extraction
of data from either an XML or HTML document.
Simple Object Access Protocol (SOAP) is a specification for making calls to Web Services or other
Web-enabled applications and services, and for how to format responses. We discuss SOAP in more detail in
Chapter 13.
Figure 9.1 illustrates the relationship that exists between XML, XSL, XSLT, and XPATH when we base an
application on XML. (This information will come in handy later in this chapter when we explain the .NET
classes for XML.) Note that the XML document serves as the data source; that is, it contains the data that we
want to display. An XPATH query of People/Person is used to gather all the Person elements from the XML
document. The XSL style adds the font elements around the data, giving the XSLT style sheet. After parsing
and processing, the end result is the HTML.
Figure 9.1. The XML/XSL hierarchy

Warning
XML has very strict rules and is extremely case-sensitive.

To see this process in action, insert the code from Listings 9.2 and 9.3 into two files, simple.xml and
simple.xsl, and place them in the same directory. Use Internet Explorer 6.0 or greater to open the
simple.xml file; the results, shown in HTML, should appear.
Listing 9.2 simple.xml

<?xml version='1.0' ?>


<?xml:stylesheet type="text/xsl" href="simple.xsl" ?>
<People>
<Person>
John Doe
</Person>
<Person>
Jane Doe
</Person>
</People>

Note how the style sheet is linked to the XML document. Using the .NET classes for XML or the MSXML parser
in Visual Basic, you can dynamically change the results, making multiple formats (such as WML) available for
output.
Listing 9.3 simple.xsl

<?xml version="1.0"?>
<HTML xmlns:xsl="http://www.w3.org/TR/WD-xsl">
<xsl:for-each select="People/Person">
<font face="Arial">
<xsl:value-of select="."/>
</font>
</xsl:for-each>
</HTML>

The line select="People/Person" is an XPATH query representing the SQL equivalent of SELECT Person
FROM People. The xsl:for-each statement is a looping statement available through the XSL specification.
Unlike XML, which has no programming directives, XSL has an entire set of directives and can be enhanced
through scripting to provide additional functionality.
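As a hypothetical illustration of those directives, a loop can be combined with a conditional. This fragment is not part of Listing 9.3; the xsl:if test shown here is an assumption for demonstration only.

```xml
<!-- Hypothetical fragment: xsl:if evaluates an XPATH expression and
     emits its contents only when the test is true. -->
<xsl:for-each select="People/Person">
  <xsl:if test="string-length(.) &gt; 0">
    <font face="Arial">
      <xsl:value-of select="."/>
    </font>
  </xsl:if>
</xsl:for-each>
```

Here only Person elements with nonempty content would be formatted and output.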

XML and Data Access


With this basic understanding of XML and how it can be used, we can apply it to the real world of XML and
see its impact on how an application's data is accessed. Once data has been formatted in an XML document,
any XML parser anywhere can read the data and the data will have the same meaning from one application
to the next. This flexibility reduces redundant development, which leads to savings through reduced
maintenance and support.
When the creators of .NET set out to produce the next version of Microsoft's development platform, such
flexibility was definitely a consideration. Almost everything in .NET uses XML to provide information. The CLR
uses XML-based configuration files to provide settings to applications. By default, DataSets are returned as
an XML document that can be simply written out to a string or to a document. XML Web Services are set up
to accept open standards such as SOAP, which means, finally, that code written in an object on the
Windows platform can be extended to any other platform capable of making an HTTP or SOAP request.
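As a quick, hedged illustration of that point (the DataSet and its XML methods are covered properly in Chapter 10), a filled DataSet can hand you its contents as XML with no extra work:

```vbnet
' Sketch only: assumes ds is a DataSet already filled from any source,
' as demonstrated in Chapters 5 and 6.
Dim xmlText As String = ds.GetXml()   ' contents as an XML string
ds.WriteXml("customers.xml")          ' contents written to an XML file
ds.WriteXmlSchema("customers.xsd")    ' the schema, also expressed as XML
```

The file names here are arbitrary examples; the methods themselves belong to the DataSet class.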

By itself, XML doesn't provide any revolutionary ways of changing data access, but using it in conjunction
with known ways of collecting and storing data does. More information on using the XML features with
ADO.NET is presented in Chapter 10.


XML Classes in .NET


Within the .NET architecture, XML is used to boost productivity, create compliance with open standards, and
integrate with ADO.NET. To enable it to do so, namespaces and classes had to be created. Enumerating all
the possible namespaces, classes, methods, properties, enumerations, interfaces, and delegates in a huge
table here would be a waste of space and of time. However, in the following discussion we show how to use
several classes within the .NET Framework to work programmatically with XML. You can then begin to create
applications that effectively use XML in a manner emulating the reasons for its use by the .NET development
team at Microsoft, and maybe have some fun along the way.
Throughout this section we present code samples to show you how to perform tasks related to application
development with XML, XSL, XSLT, and XPATH. (See the note regarding additional references for information
about XML and the descriptions of these technologies earlier in the chapter.) Again, the main purpose of XML
is to structure and describe data. The classes provided in the .NET Framework are there to help you do just
that. The classes selected for discussion are from the System.Xml namespace and are frequently used in
work with XML data. These classes are equally important to your work in VB.NET. Any additional coding
introduced is explained. Because of VB.NET's many new features, writing a program that doesn't take
advantage of at least one of them is almost impossible.

Working with the Document Object Model


Introduction of the Document Object Model made programming dynamically generated sites much easier by
allowing developers to add functionality to static elements from HTML/XML documents. Consider the simple
JavaScript of document.location: This line navigates through the collection of elements associated with a
Web page, starting at the top with the document object and finding its location property. In some sense
this programmatic access to objects contained on a Web page was revolutionary. Then came XML with its
strict rules and basically a dynamic object model based on what the author of the document had determined
would be the root, or parent element.
We generated the file simple2.xml to provide an XML document for use in the rest of this chapter. We
created it by using SQLXML, which we explain later in this chapter, and basically just saving the results with
an .xml file extension. Typing line after line of HTML is no one's idea of fun, and we don't have to be
concerned with formatting the data just yet. Moreover, we don't have to worry about any extraneous
information being placed in the document, as can happen when we're using certain HTML editors and
generators.
Listing 9.4 shows our simple2.xml file as generated. The location of the line breaks doesn't really matter
because, like C++, C#, Java, and JavaScript, XML delimits its units explicitly. In those programming
languages a semicolon (;) is used to end each statement; in XML a closing tag is used. Everything between an
element's opening tag and its closing tag is processed as one unit, even if subelements fall between them.
Listing 9.4 simple2.xml

<?xml version="1.0" encoding="utf-8" ?>


<customer>
<tblCustomer ID="1" FirstName="Carole" LastName="Vermeren"

Address="6227 East Crossing Drive" City="Ocean Glen" State="NH"


PostalCode="98609" Phone="6034485994" />
<tblCustomer ID="2" FirstName="Cathy" LastName="Johnson"
Address="1629 West River Street" City="Big Center" State="NC"
PostalCode="18602" Phone="9193669205" />
<tblCustomer ID="3" FirstName="Eric" LastName="Haglund"
Address="9193 West Beach Street" City="Brown Heights" State="OK"
PostalCode="83481" Phone="4059310689" />
<tblCustomer ID="4" FirstName="Julie" LastName="Ryan"
Address="9161 Fort Beach Way" City="South Point" State="KY"
PostalCode="26973" Phone="5025245220" />
<tblCustomer ID="5" FirstName="Richard" LastName="Halpin"
Address="9790 Happy River Street" City="North Lake" State="MN"
PostalCode="62875" Phone="6124066311" />
<tblCustomer ID="6" FirstName="Kathleen" LastName="Johnson"
Address="9385 West Heights Street" City="Brown Towne" State="MI"
PostalCode="59609" Phone="3138032214" />
<tblCustomer ID="7" FirstName="Sorel" LastName="Polito"
Address="2104 Brown Brook Drive" City="Blue Valley" State="MT"
PostalCode="54401" Phone="4067260212" />
<tblCustomer ID="8" FirstName="Sorel" LastName="Terman"
Address="1920 West Point Street" City="Blue Bluffs" State="WI"
PostalCode="08965" Phone="6086246867" />
<tblCustomer ID="9" FirstName="Randy" LastName="Hobaica"
Address="4619 North Plains Drive" City="Brown Ridge" State="CT"
PostalCode="09793" Phone="2039421728" />
<tblCustomer ID="10" FirstName="Matthew" LastName="Haglund"
Address="8725 Sunset Crossing Avenue" City="New Brook"
State="AR" PostalCode="79013" Phone="5014589191" />
</customer>

The information in Listing 9.4 is from the Novelty database created earlier in Chapters 1 through 3. The query used
to get this information is

SELECT TOP 10 * FROM tblCustomer FOR XML AUTO

We added the <customer> element manually here. Later, in the section on SQLXML, we show how to set the
root element of an XML document automatically.
The first class related to the DOM that we consider is XmlDocument. Without it, you won't get very far
using XML data or documents. In the simplest case, XML data is loaded by calling the XmlDocument.Load()
method for a document on disk, or the XmlDocument.LoadXml() method for an in-memory string.
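A minimal sketch of both loading paths follows; the file name and the inline XML string are hypothetical, and the code assumes it runs inside a method in a file with Imports System.Xml at the top.

```vbnet
Dim doc As New XmlDocument()
doc.Load("simple2.xml")    ' load from a file (Load also accepts streams and URLs)

Dim doc2 As New XmlDocument()
doc2.LoadXml("<customer><tblCustomer ID=""1"" /></customer>")  ' from a string
```

Either way, once the document is loaded, the rest of the DOM classes work identically.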

Working with XPATH


Now that we have the document loaded, what do we do with it? Within the System.Xml namespace are the
XmlNode and XmlNodeList classes. Using these classes and a little XPATH is all we need to read through an
XML document that we've loaded and fetch the data that we're interested in. Listing 9.5 shows a simple

VB.NET application that loads our simple2.xml document and adds all the FirstName attributes to a
listbox control.
Listing 9.5 XMLDocument and XMLNode sample

Imports System.Xml
Imports System.Xml.XPath
Imports System.IO

Public Class Form1
    Inherits System.Windows.Forms.Form
    ' . . . (Generated code removed from listing)
    Private Sub Form1_Load(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles MyBase.Load
        Dim xDoc As New XmlDocument()
        xDoc.Load("simple2.xml")
        'Note the XPATH query syntax for getting to an attribute.
        Dim xNodeList As XmlNodeList = _
            xDoc.SelectNodes("descendant::tblCustomer/@FirstName")
        Dim xNode As XmlNode
        For Each xNode In xNodeList
            lstResults.Items.Add(xNode.InnerText)
        Next
    End Sub
End Class

To execute this code, create a new VB.NET executable project and on the form create a listbox, lstResults.
Place the code from Listing 9.5 in the form's Load event. When you execute the code, provided the
simple2.xml file is in the same directory as your application, you should get results similar to those shown in
Figure 9.2.
Figure 9.2. Results of Listing 9.5

As we've just demonstrated, loading the XML and navigating through it isn't a problem; it really is that easy.
Next we ask, How do we change the data that's in the document? The answer comes from the same
combination of classes we used earlier.

Note
To keep things clean and bring us a little closer to the real world, in the VB.NET project created for
Listing 9.5, we move the code inside the Form_Load routine to a private subroutine, ShowTop10.
We show it in Listing 9.6 shortly.

Listing 9.6 shows the modified Visual Basic application, reflecting changing the node's value and saving it to
the XML file. This example is fairly simple, but the techniques used are extremely efficient and reliable when
it comes to modifying an XML document. The modifications from the original code also include the addition of
two command buttons, btnShowTop10 and btnChangeAndSave. We added these buttons as a simple way to
control which routine gets executed. After adding this code to your Visual Basic executable project and being
sure that you have the simple2.xml file in the application's bin directory, start the application and click on
the ShowTop10 button. The results should be similar to those shown in Figure 9.2. To change one of the
values, double-click on the item in the listbox that you want to change. An input box will prompt you to enter
a new value for the item you selected. Clicking on OK in the input box changes the XML document and
reloads the values in the listbox to show the changes you made.
Listing 9.6 Completed XMLDomSample application code

Imports System.Xml
Imports System.Xml.XPath
Imports System.IO

Public Class Form1
    Inherits System.Windows.Forms.Form
    ' . . . (Generated code removed from listing)
    Private Sub Form1_Load(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles MyBase.Load
    End Sub

    Private Sub ShowTop10()
        'This is the code used in Listing 9.5.
        Dim xDoc As New XmlDocument()
        xDoc.Load("simple2.xml")
        'Note the XPATH syntax used to get the attribute of an element.
        Dim xNodeList As XmlNodeList = xDoc.SelectNodes( _
            "descendant::tblCustomer/@FirstName")
        Dim xNode As XmlNode
        Dim i As Integer = 0
        For Each xNode In xNodeList
            lstResults.Items.Insert(i, xNode.InnerText)
            i = i + 1
        Next
    End Sub

    Public Sub ChangeNameandSave(ByVal NameToChange As String, _
            ByVal ChangeTo As String)
        Dim xDoc As New XmlDocument()
        xDoc.Load("simple2.xml")
        Dim xNodeList As XmlNodeList = xDoc.SelectNodes( _
            "descendant::tblCustomer/@FirstName")
        Dim xNode As XmlNode
        For Each xNode In xNodeList
            If xNode.InnerText = NameToChange Then
                xNode.Value = ChangeTo
            End If
        Next
        xDoc.Save("simple2.xml")
        MsgBox("Name change saved !", 0)
        lstResults.Items.Clear()
        ShowTop10()
    End Sub

    Private Sub btnShowTop10_Click(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles btnShowTop10.Click
        ShowTop10()
    End Sub

    Private Sub lstResults_DoubleClick(ByVal sender As Object, _
            ByVal e As System.EventArgs) Handles lstResults.DoubleClick
        Dim oldName As String = lstResults.GetItemText( _
            lstResults.Items.Item(lstResults.SelectedIndex))
        Dim newName As String = InputBox( _
            "Please enter a new name", "ChangeAndSave")
        ChangeNameandSave(oldName, newName)
    End Sub
End Class

Note
The complete code listings for this chapter are available from the publisher's Web
sitehttp://www.awprofessional.com.


Extending SQL Server with SQLXML 3.0 and IIS


SQLXML 3.0 provides a way of gathering data and generating XML documents. Although it is no substitute
for what can be achieved with ADO.NET, it is a way to extend the functionality of SQL Server through a
browser interface for creating reports, monitoring activity, and doing other tasks.
To use SQLXML, you must have Internet Information Server (IIS) and SQL Server 2000 installed, as a
version of SQLXML installs with SQL Server 2000. Our environment consists of Windows XP Professional, IIS
5.1, and SQLXML 3.0.
SQLXML extends the functionality of SQL Server 2000 by allowing you to query the database through an
HTTP request. This capability has many advantages along with some security concerns. Most of these
concerns can be lessened by enforcing Windows Authentication on both the virtual directories that will
execute queries and the database itself.

Note
You can download SQLXML 3.0 free of charge from Microsoft at
http://msdn.microsoft.com/downloads/default.asp?url=/downloads/sample.asp?url=/msdnfiles/027/001/824/msdncompositedoc.xml

Installing and Configuring SQLXML 3.0


Once you have downloaded SQLXML, simply double-click on the downloaded executable file to begin
installation. From there the Installation Wizard is intuitive and asks only a couple of questions: Do you agree
to the license? and Do you want a custom or typical installation? The only difference between a custom and a
typical installation is the ability to change the directory the files will be placed in during installation.
Configuring SQLXML is relatively simple. To begin, navigate through your Start Menu to Programs, Microsoft
SQL Server, Configure SQL XML Support in IIS. If all is well, an MMC Console similar to that shown in Figure
9.3 will appear.
Figure 9.3. IIS Virtual Directory Manager for SQL Server

Note that the pane on the right side has only one column, Computer. Double-clicking on the computer name
listed there will expand a list of all of the Web sites running on the machine, as illustrated in Figure 9.4. Also
note that the column name has changed to Web Site Name.
Figure 9.4. List of Web sites

Double-clicking on the Default Web Site on either pane displays a list of configured virtual directories on the
right. Figure 9.5 shows this window.
Figure 9.5. Virtual Directory pane, with no configured directories

To begin configuring a new virtual directory, right-click on the Default Web Site node and from the context
menu select New, Virtual Directory, as shown in Figure 9.6.
Figure 9.6. Context menu selection

Before completing the next step, create the directory c:\inetpub\wwwroot\novelty. You can do so by using
Explorer or clicking on Start, Run and typing "cmd". In the resulting command window, type "mkdir
c:\inetpub\wwwroot\novelty", and press Enter. Within that directory, make a subdirectory and name it
Templates. (All these steps will make sense shortly).

Note
Unless otherwise noted, all commands in quotes are meant to be typed without the quotes around
them.

Once you've selected a virtual directory, a new dialog is presented with six tabs across the top. The first tab,
General, asks you to specify a name for the virtual directory, as well as a directory to hold any files that you
may want to show. Type "Novelty" in the textbox within the frame labeled "Virtual Directory Name". Next,
type or click on Browse to locate and set a local directory for the virtual directory to use. Although it won't
necessarily contain files, the directory must exist. Then type "c:\inetpub\wwwroot\novelty" as the local path,
as shown in Figure 9.7.
Figure 9.7. Setting the Virtual Directory name and local path

Next, click on the Security tab at the top of the window and select the option Use Windows Integrated
Authentication. This step is based on the assumption that SQL Server is set up to use either Mixed-Mode
Authentication or Windows Authentication. If your server isn't set up this way, you can use either of the
other two options, depending on the level of security you expect. If security isn't a big issue (for example, if
the server you're working on isn't connected to any kind of external network), the first option will work well.
It allows you to cache credentials in much the same way a connection string works. The third option uses
HTTP-Based Basic Authentication to authenticate the user, based on the SQL Server account. Figure 9.8
shows the suggested configuration.
Figure 9.8. Security tab settings

Now click on the Data Source tab. This dialog asks you to indicate the instance of SQL Server that should be
connected to and the database being accessed. In the SQL Server frame is a textbox; enter the name of the
SQL Server you want to connect to; in this case "(local)" works just fine.

Note
The use of "(local)" is a "friendly name" for the server running on the local machine. If you have
replication configured on your SQL Server, either as a subscriber or a publisher, the friendly
names won't work and you'll have to use the actual machine name instead.

Next, in the frame labeled "Database", click on the down arrow of the drop-down select list to select the
Novelty database. Note that the databases listed reflect what the credentials provided in the security
settings have rights to. This window is shown in Figure 9.9.
Figure 9.9. Data Source settings

Now click on the Settings tab at the top of the window. Be sure that the selections for Allow URL queries,
Allow template queries, and Allow XPATH are selected as shown in Figure 9.10.
Figure 9.10. Settings options

Finally, click on the Virtual Names tab. In the frame labeled "Defined Virtual Names", click on the New
button. A new dialog is presented. For the Virtual name field, enter "templates". From the Type select box,
select template and in the Path textbox, type "c:\inetpub\wwwroot\novelty\templates" or click on the "..." button
and browse to that location. Figure 9.11 shows this dialog box. When you've filled in everything, click on
Save.
Figure 9.11. Virtual Name Configuration dialog

A window similar to that shown in Figure 9.12 should appear. Click on OK to close the New Virtual Directory
window.
Figure 9.12. Virtual Names tab configured

You now have successfully configured a virtual directory through IIS that can execute SQL queries against an
SQL Server database. If you double-click on the Default Web Site node, a window similar to that shown in
Figure 9.13 should appear. Now close the IIS Virtual Directory Manager for SQL Server, as it is no longer
needed.
Figure 9.13. IIS Virtual Directory Manager for SQL Server with Novelty site configured

Configuration Results
Now let's take a look at what that entire configuration has allowed you to do. Open Internet Explorer 6.0 or
higher and type the URL shown in the following code. The results will look exactly like the XML document
shown in Listing 9.4. We created the simple2.xml file in exactly this way. Once we had visited the URL,
performing a simple Save As from the File menu in Internet Explorer created the file:

http://localhost/Novelty?sql=select top 10 *
from tblCustomer FOR XML AUTO&root=customer

Note that the root parameter at the end of the URL specifies what the root element of the document will be.
Without it, you'll have a hard time displaying HTTP queries in a Web browser.


Using XML, XSLT, and SQLXML to Create a Report


It's time to put everything presented in this chapter together in a practical context. We do so by generating
a list of customer addresses. As you will see in the code listings, changing the HTML content within the XSL
file isn't difficult, and we could just as easily format this page to print in a special way or be used in
conjunction with JavaScript/ASP.NET to provide robust functionality.
Basically, we need two things from SQL Server: data, and that data in XML format. For that, we use a
template to store the query. The following code shows the contents of the noveltytemplate.xml file. The sole
purpose of this file is to collect data and assign a style sheet.

<?xml version = '1.0' encoding= 'UTF-8'?>


<root xmlns:sql='urn:schemas-microsoft-com:xml-sql'
sql:xsl='noveltyxsl.xsl'>
<sql:query>
SELECT FirstName, LastName, Address, City,
State FROM tblCustomer FOR XML AUTO
</sql:query>
</root>

The first line of code establishes a basic XML document. This line also shows a way of linking the style sheet
to the XML document, as with the xml:stylesheet element used in Listing 9.2. The next element, sql:query,
is the container for the SQL command or query that we want to execute; note the use of FOR XML AUTO
again. The FOR XML statement tells SQL Server to return the results as XML. In a template, we assume that
the root element is called "root", so we don't need to specify that in the template query.
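Templates can also accept parameters. The following variation is a hypothetical sketch, not used elsewhere in this chapter, showing how a sql:header block declares a parameter (here named State, with a default value) that the query can then reference:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<root xmlns:sql="urn:schemas-microsoft-com:xml-sql">
  <sql:header>
    <!-- Default value used when the URL supplies no State parameter -->
    <sql:param name="State">OH</sql:param>
  </sql:header>
  <sql:query>
    SELECT FirstName, LastName, City, State
    FROM tblCustomer WHERE State = @State FOR XML AUTO
  </sql:query>
</root>
```

A caller would then append something like ?State=NH to the template's URL to override the default.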

Note
If you want to assign a style sheet dynamically when using SQLXML, append "xsl=" to the URL as
a query string and specify the file to use; for example, http://<machinename>/<templates virtual
directory>/<template filename>.xml?xsl=<xsl filename>.xsl

Listing 9.7 shows the XSL style sheet applied to the resulting XML. We use XPATH to specify the attribute we
want to collect. Using standard CSS syntax and HTML, we format the information to be viewed in a browser.
Listing 9.7 noveltyxsl.xsl complete

<?xml version= '1.0' encoding= 'UTF-8'?>


<xsl:stylesheet xmlns:xsl='http://www.w3.org/1999/XSL/Transform'
version="1.0">

<xsl:template match = '*'>


<xsl:apply-templates />
</xsl:template>
<!-- Unless otherwise specified, the child elements
will be the name of the table queried -->
<xsl:template match = 'tblCustomer'>
<TR>
<!-- Notice the use of XPATH to collect the fields -->
<TD><xsl:value-of select = '@FirstName' /></TD>
<TD><xsl:value-of select = '@LastName' /></TD>
<TD><xsl:value-of select = '@Address' /></TD>
<TD><xsl:value-of select = '@City' /></TD>
<TD><xsl:value-of select = '@State' /></TD>
</TR>
</xsl:template>
<xsl:template match = '/'>
<HTML>
<HEAD>
<STYLE>th { background-color: #00008C; color: #ffffff;}
td {font-family: Arial}</STYLE>
</HEAD>
<BODY>
<TABLE border='1' style='width:600;'>
<TR><TH colspan='9'>Customers</TH></TR>
<TR>
<TH>First name</TH>
<TH>Last name</TH>
<TH>Address</TH>
<TH>City</TH>
<TH>State</TH>
</TR>
<xsl:apply-templates select = 'root' />
</TABLE>
</BODY>
</HTML>
</xsl:template>
</xsl:stylesheet>

Place the two files, noveltytemplate.xml and noveltyxsl.xsl, in the templates directory of the Novelty virtual
Web created in the Installing and Configuring SQLXML 3.0 section earlier in this chapter. Once the files are in
place, open Internet Explorer 6.0 or higher and navigate to the following URL, assuming of course that you
are running everything locally.
http://localhost/novelty/templates/noveltytemplate.xml?contenttype=text/html
Note the contenttype parameter added to the URL, which specifies that the end result will be an HTML page.
You should now have a page that looks like the one shown in Figure 9.14.
Figure 9.14. Results of XML template execution with XSL


Summary
In this chapter we presented the basics of XML and its purpose. We also demonstrated some of the ways
that XML can be used within the .NET Framework and how to work with XML from VB.NET. By the end of the
chapter, we had shown you how to configure IIS and SQL Server 2000 to return XML documents that can be
easily manipulated to produce HTML pages. Much of the information covered here is background for the
material presented in Chapter 10.

Questions and Answers

Q1:

What exactly does "create a schema" mean in XML lingo?

A1:

As in database lingo, creating a schema refers to a document that defines objects and entities. In
XML, this concept can be extended to include schemas that require certain information to be
included in a document, similar to defining a NOT NULL field in a database table.
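As a hypothetical illustration of that point, an XML Schema (XSD) fragment that requires an attribute might look like this; the element and attribute names are taken from this chapter's Person example, but the schema itself is a sketch:

```xml
<!-- Hypothetical XSD fragment: use="required" plays a role similar
     to NOT NULL on a database column. -->
<xs:element name="Person" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType>
    <xs:attribute name="firstName" type="xs:string" use="required"/>
  </xs:complexType>
</xs:element>
```

A validating parser would reject any Person element that omitted the firstName attribute.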

Q2:

What if I want to name an element in my XML document the same as the name of an
HTML element? Say "title", for example.

A2:

This is where namespaces come into play: they allow you to define a custom
namespace within your document and then reference its elements separately. For example, suppose you've
created an XML document to which you want to apply XSLT, where one of the fields returned
from the database is labeled "title". Because HTML already has the title tag reserved, you need
to create your own namespace.
Take a look at this namespace declaration, written as an attribute on an element near the top of the document:

xmlns:b="http://myMachine.com"

It allows you to prefix any XML element with "b:", and it won't interfere with any of the HTML
reserved words.
The elements <b:title> and <title> are now two different entities. The only hard and fast rule that you
must comply with here is to ensure that the namespace declaration occurs somewhere near
the beginning of the document, before you declare any conflicting elements.


Chapter 10. ADO.NET and XML


IN THIS CHAPTER

Basic Reading and Writing of XML


Creating an XmlReader from a Command Object
The XmlDataDocument Object
Setting aside the marketing hype, XML does address a lot of real development and business problems. That's
especially true as the integration of diverse systems and platforms, both within a company and between
companies, is becoming increasingly important. One of the fundamental design goals for Visual Studio.NET
was to offer extensive support for XML from the ground up.
XML is the persistence and transmission format for the DataSet . That is, when a DataSet is saved to disk,
the format that it's saved in is XML, rather than some proprietary and/or binary format. Also, when a
DataSet is passed from one computer or process to another, it is passed as an XML stream.
In earlier chapters, we showed that the DataSet doesn't know or care about the source of the data that it
contains. As far as it is concerned, data is data, regardless of where it comes from. The same is true when
the data source is XML. The DataSet offers flexible support for reading and writing XML data and/or schema
information. This support goes well beyond the simple support that was patched onto previous versions of
ADO. Moreover, together with the XmlDataDocument, an application can both view and manipulate data in a
DataSet by using relational tools, XML tools, or both, depending on the particular situation.
Using XML, along with its related technologies and tools, is a very broad topic, most of which is beyond the
scope of this book. In Chapter 9 we presented the basics of the extensive XML support provided by the .NET
Framework. In this chapter we present the basics of integration between ADO.NET and XML. Further
information on these topics can be found in the help files for VS.NET and the .NET Framework and at the
MSDN Web site, http://msdn.microsoft.com.

[ Team LiB ]

[ Team LiB ]

Basic Reading and Writing of XML


In Chapters 5 and 6 we demonstrated how to fill a DataSet with data either programmatically or from a
database. Another method for loading data into a DataSet is reading in XML. As you would expect, you can
also write the data in a DataSet as XML data. Further, the DataSet allows you to read and write XML
schemas, either together with the XML data or separately.

Reading XML
ADO.NET provides rich and varied support for reading and writing both XML and XML schemas. Let's take a
look at the basic use and operation of these methods and the properties that work with them.
As we've done in previous chapters, we'll build a simple form to demonstrate the fundamentals of working
with ADO.NET and XML. Later in this chapter, in the Business Case, we illustrate a real-world use of ADO.NET
and XML. Follow these steps as we build the form.
1. Create a new Visual Basic Windows Application project.
2. Name the project ADO-XML.
3. Specify a path for saving the project files.
4. Enlarge the size of Form1.
5. In the Properties window for Form1, set its Name property to frmXML and its Text property
   to ADO.NET and XML.
6. In the upper-left-hand corner of the form, add a button from the Windows Forms tab of the
   Toolbox.
7. In the Properties window, set the Name property of the button to btnReadXML and set the
   Text property to Read XML.
8. From the Windows Forms tab of the Toolbox, add a DataGrid to frmXML and place it on the
   right side of the form.
9. In the Properties window, set the Name property of the DataGrid to grdData.
10. Enlarge the DataGrid so that it covers about 80 percent of the area of the form.

As usual, we add the following to the top of the file and then add the routine shown in Listing 10.1 to the
frmXML class definition.

Imports System
Imports System.Data
Imports System.Data.SqlClient

Listing 10.1 Reading the contents of an XML file into a DataSet

Private Sub btnReadXML_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnReadXML.Click
    Dim dsPubs As New DataSet()
    ' Read in XML from file.
    dsPubs.ReadXml("..\Pubs.xml")
    ' Bind DataSet to Data Grid.
    grdData.DataMember = "publishers"
    grdData.DataSource = dsPubs
End Sub

This function reads the XML data from the file Pubs.xml into the DataSet . At this point, the DataSet and
its data can be accessed in any of the ways that we have discussed in earlier chapters. In addition, this
routine then binds the DataSet to a DataGrid . Listing 10.2 shows the contents of the file Pubs.xml. Figure
10.1 shows the data displayed in a DataGrid .
Figure 10.1. The contents of the file Pubs.xml displayed in a DataGrid

Listing 10.2 The contents of the file Pubs.xml

<?xml version="1.0" standalone="yes"?>


<NewDataSet>
<publishers>
<pub_id>0736</pub_id>
<pub_name>New Moon Books</pub_name>
<city>Boston</city>
<state>MA</state>
<country>USA</country>

</publishers>
<publishers>
<pub_id>0877</pub_id>
<pub_name>Binnet &amp; Hardley</pub_name>
<city>Washington</city>
<state>DC</state>
<country>USA</country>
</publishers>
<publishers>
<pub_id>1389</pub_id>
<pub_name>Algodata Infosystems</pub_name>
<city>Berkeley</city>
<state>CA</state>
<country>USA</country>
</publishers>
<publishers>
<pub_id>1622</pub_id>
<pub_name>Five Lakes Publishing</pub_name>
<city>Chicago</city>
<state>IL</state>
<country>USA</country>
</publishers>
<publishers>
<pub_id>1756</pub_id>
<pub_name>Ramona Publishers</pub_name>
<city>Dallas</city>
<state>TX</state>
<country>USA</country>
</publishers>
<publishers>
<pub_id>9952</pub_id>
<pub_name>Scootney Books</pub_name>
<city>New York</city>
<state>NY</state>
<country>USA</country>
</publishers>
<publishers>
<pub_id>9999</pub_id>
<pub_name>Lucerne Publishing</pub_name>
<city>Paris</city>
<country>France</country>
</publishers>
</NewDataSet>

Note
When ReadXml is used to load a DataSet , the RowState property of all of the (new) rows is set
to Added. This behavior is different from the default when a DataAdapter is used to load a
DataSet from a database, where the RowState property of all the rows is set to Unchanged.
Loading the rows as Added allows the data to be loaded from an XML source and then inserted
into a database table. If you don't want to do that, you can reset the RowState to Unchanged by
calling the AcceptChanges method. Conversely, if you want to change the default behavior when
loading a DataSet from a database, setting the DataAdapter 's AcceptChangesDuringFill property
to False will cause the newly added rows to have a RowState of Added.
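
The behavior described in the note can be observed with a few lines of code (a sketch; it
assumes the Pubs.xml file used earlier in this chapter):

Dim ds As New DataSet()
ds.ReadXml("..\Pubs.xml")
' Rows read from XML arrive with RowState = Added...
Console.WriteLine(ds.Tables("publishers").Rows(0).RowState)
' ...until AcceptChanges resets them to Unchanged.
ds.AcceptChanges()
Console.WriteLine(ds.Tables("publishers").Rows(0).RowState)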

The preceding example demonstrates the simplest form of reading XML data into a DataSet :
reading it from a file. There are numerous other forms (function overloads) of this method for reading XML, including
using a Stream , a TextReader , or an XmlReader . A parallel set of ReadXml methods also accepts a
second parameter that specifies the value of XmlReadMode to use. This parameter is used to specify how to
interpret the contents of the XML source, and how to handle the data's schema. Table 10.1 shows the
XmlReadMode enumeration and describes the possible values.

Table 10.1. The XmlReadMode Enumeration

ReadSchema
Reads any existing inline schema, loading both the data and the schema into the DataSet .
Tables defined in the schema are added to the DataSet , but an exception is thrown if the
schema defines a table that is already defined in the DataSet schema.

IgnoreSchema
Ignores any existing inline schema, loading the data into the DataSet by using the DataSet 's
existing schema definition. Any data that does not match the DataSet 's schema is ignored and
not loaded. Similarly, if no schema is defined, no data is loaded.

InferSchema
Ignores any existing inline schema and infers the schema from the structure of the data, and
then loads the data into the DataSet . Tables and columns inferred from the data are added to
the schema of the DataSet . If they conflict with the existing definitions, an exception is
thrown.

Fragment
Reads all existing XML fragments and loads the data into the DataSet . Any data that does not
match the DataSet 's schema is ignored and not loaded.

DiffGram
Reads a DiffGram and loads the data into the DataSet . New rows are merged with existing rows
when the unique identifier values match; otherwise new rows are just added to the DataSet . If
the schemas don't match, an exception is thrown.

Auto
The default mode. The most appropriate of the following options is performed: (1) if the XML
data is a DiffGram , the XmlReadMode is set to DiffGram ; (2) if a schema is defined in the
DataSet or inline as part of the XML document, the XmlReadMode is set to ReadSchema ; and (3)
otherwise, the XmlReadMode is set to InferSchema .
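
For example, one of the overloads takes an XmlReader together with an explicit XmlReadMode
(a sketch; the file name is illustrative, and it assumes Imports System.Xml at the top of the
file):

Dim ds As New DataSet()
Dim reader As New XmlTextReader("..\Pubs.xml")
' Ignore any inline schema and infer the table structure from the data itself.
ds.ReadXml(reader, XmlReadMode.InferSchema)
reader.Close()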

A separate (overloaded) method of the DataSet , ReadXmlSchema , is available to read in just the schema
information and not the actual data. It can be used simply to read the schema of a DataSet 's
DataTable(s), as in

MyDataSet.ReadXmlSchema ("MySchemaFile.xml")

The same four sources of data (file, Stream , TextReader , and XmlReader ) are also available for the
ReadXmlSchema method. The DataSet also has analogous sets of methods for the WriteXml and
WriteXmlSchema methods, as described next.
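
Putting ReadXmlSchema and ReadXml together, a common pattern is to load the schema first and
then read only conforming data. Here is a sketch using the StoreSales file names from this
chapter's examples:

Dim ds As New DataSet()
' Define the tables and columns from the XSD document...
ds.ReadXmlSchema("..\StoreSales.xsd")
' ...then load only the data that matches that schema.
ds.ReadXml("..\StoreSales.xml", XmlReadMode.IgnoreSchema)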

Writing XML
Once we have loaded data and/or schema information into the DataSet , regardless of how or from where it
was loaded, it can be written as XML and/or XML schemas. Follow along as we continue with the form
frmXML we prepared earlier.
1. Add a button immediately below the btnReadXML button from the Windows Forms tab of the Toolbox.
2. In the Properties window, set the Name property of the button to btnWriteXML and set the Text
property to Write XML.
3. Add the code shown in Listing 10.3 .
Listing 10.3 Code to save the contents of a DataSet as an XML file

Private Sub btnWriteXML_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnWriteXML.Click
    Dim dsSales As New DataSet()
    Dim cn As New SqlConnection _
        ("data source=localhost;initial catalog=pubs;user id=sa")
    Dim daSales As New SqlDataAdapter("select * from sales", cn)
    Dim daStores As New SqlDataAdapter("select * from stores", cn)
    ' Load relational data from database.
    daSales.Fill(dsSales, "Sales")
    daStores.Fill(dsSales, "Stores")
    ' Write XML out to file.
    dsSales.WriteXml("..\StoreSales.xml")
End Sub

The btnWriteXML_Click routine initializes two DataAdapters and then uses them to fill the dsSales DataSet
with the data from two tables in the SQL Server sample database "pubs". Listing 10.4 shows the contents of

the file StoreSales.xml that this routine creates. Note that the XML document contains all the
sales records first, followed by the stores records. This makes sense because no relationship
has been defined between the two tables. In cases where tables are related, you'll want the
records to be nested. We give an example of nesting records later, in Business Case 10.1.
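
As a preview, nesting is produced by a DataRelation whose Nested property is True. The
following sketch assumes (our assumption, not part of the chapter's listings) that the two
tables in dsSales can be related on their shared stor_id column:

' Relate Stores (parent) to Sales (child) on stor_id, and nest the child rows.
Dim rel As DataRelation = dsSales.Relations.Add("StoreSales", _
    dsSales.Tables("Stores").Columns("stor_id"), _
    dsSales.Tables("Sales").Columns("stor_id"))
rel.Nested = True
' Each <Stores> element now contains its own <Sales> elements.
dsSales.WriteXml("..\NestedStoreSales.xml")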
Listing 10.4 The contents of the file StoreSales.xml

<?xml version="1.0" standalone="yes"?>


<NewDataSet>
<Sales>
<stor_id>6380</stor_id>
<ord_num>6871</ord_num>
<ord_date>1994-09-14T00:00:00.0000000+02:00</ord_date>
<qty>5</qty>
<payterms>Net 60</payterms>
<title_id>BU1032</title_id>
</Sales>
<Sales>
<stor_id>6380</stor_id>
<ord_num>722a</ord_num>
<ord_date>1994-09-13T00:00:00.0000000+02:00</ord_date>
<qty>3</qty>
<payterms>Net 60</payterms>
<title_id>PS2091</title_id>
</Sales>
<Sales>
<stor_id>7066</stor_id>
<ord_num>A2976</ord_num>
<ord_date>1993-05-24T00:00:00.0000000+02:00</ord_date>
<qty>50</qty>
<payterms>Net 30</payterms>
<title_id>PC8888</title_id>
</Sales>
<Sales>
<stor_id>7066</stor_id>
<ord_num>QA7442.3</ord_num>
<ord_date>1994-09-13T00:00:00.0000000+02:00</ord_date>
<qty>75</qty>
<payterms>ON invoice</payterms>
<title_id>PS2091</title_id>
</Sales>
<Sales>
<stor_id>7067</stor_id>
<ord_num>D4482</ord_num>
<ord_date>1994-09-14T00:00:00.0000000+02:00</ord_date>
<qty>10</qty>
<payterms>Net 60</payterms>
<title_id>PS2091</title_id>
</Sales>
<Sales>
<stor_id>7067</stor_id>

<ord_num>P2121</ord_num>
<ord_date>1992-06-15T00:00:00.0000000+02:00</ord_date>
<qty>40</qty>
<payterms>Net 30</payterms>
<title_id>TC3218</title_id>
</Sales>
<Sales>
<stor_id>7067</stor_id>
<ord_num>P2121</ord_num>
<ord_date>1992-06-15T00:00:00.0000000+02:00</ord_date>
<qty>20</qty>
<payterms>Net 30</payterms>
<title_id>TC4203</title_id>
</Sales>
<Sales>
<stor_id>7067</stor_id>
<ord_num>P2121</ord_num>
<ord_date>1992-06-15T00:00:00.0000000+02:00</ord_date>
<qty>20</qty>
<payterms>Net 30</payterms>
<title_id>TC7777</title_id>
</Sales>
<Sales>
<stor_id>7131</stor_id>
<ord_num>N914008</ord_num>
<ord_date>1994-09-14T00:00:00.0000000+02:00</ord_date>
<qty>20</qty>
<payterms>Net 30</payterms>
<title_id>PS2091</title_id>
</Sales>
<Sales>
<stor_id>7131</stor_id>
<ord_num>N914014</ord_num>
<ord_date>1994-09-14T00:00:00.0000000+02:00</ord_date>
<qty>25</qty>
<payterms>Net 30</payterms>
<title_id>MC3021</title_id>
</Sales>
<Sales>
<stor_id>7131</stor_id>
<ord_num>P3087a</ord_num>
<ord_date>1993-05-29T00:00:00.0000000+02:00</ord_date>
<qty>20</qty>
<payterms>Net 60</payterms>
<title_id>PS1372</title_id>
</Sales>
<Sales>
<stor_id>7131</stor_id>
<ord_num>P3087a</ord_num>
<ord_date>1993-05-29T00:00:00.0000000+02:00</ord_date>
<qty>25</qty>
<payterms>Net 60</payterms>

<title_id>PS2106</title_id>
</Sales>
<Sales>
<stor_id>7131</stor_id>
<ord_num>P3087a</ord_num>
<ord_date>1993-05-29T00:00:00.0000000+02:00</ord_date>
<qty>15</qty>
<payterms>Net 60</payterms>
<title_id>PS3333</title_id>
</Sales>
<Sales>
<stor_id>7131</stor_id>
<ord_num>P3087a</ord_num>
<ord_date>1993-05-29T00:00:00.0000000+02:00</ord_date>
<qty>25</qty>
<payterms>Net 60</payterms>
<title_id>PS7777</title_id>
</Sales>
<Sales>
<stor_id>7896</stor_id>
<ord_num>QQ2299</ord_num>
<ord_date>1993-10-28T00:00:00.0000000+02:00</ord_date>
<qty>15</qty>
<payterms>Net 60</payterms>
<title_id>BU7832</title_id>
</Sales>
<Sales>
<stor_id>7896</stor_id>
<ord_num>TQ456</ord_num>
<ord_date>1993-12-12T00:00:00.0000000+02:00</ord_date>
<qty>10</qty>
<payterms>Net 60</payterms>
<title_id>MC2222</title_id>
</Sales>
<Sales>
<stor_id>7896</stor_id>
<ord_num>X999</ord_num>
<ord_date>1993-02-21T00:00:00.0000000+02:00</ord_date>
<qty>35</qty>
<payterms>ON invoice</payterms>
<title_id>BU2075</title_id>
</Sales>
<Sales>
<stor_id>8042</stor_id>
<ord_num>423LL922</ord_num>
<ord_date>1994-09-14T00:00:00.0000000+02:00</ord_date>
<qty>15</qty>
<payterms>ON invoice</payterms>
<title_id>MC3021</title_id>
</Sales>
<Sales>
<stor_id>8042</stor_id>

<ord_num>423LL930</ord_num>
<ord_date>1994-09-14T00:00:00.0000000+02:00</ord_date>
<qty>10</qty>
<payterms>ON invoice</payterms>
<title_id>BU1032</title_id>
</Sales>
<Sales>
<stor_id>8042</stor_id>
<ord_num>P723</ord_num>
<ord_date>1993-03-11T00:00:00.0000000+02:00</ord_date>
<qty>25</qty>
<payterms>Net 30</payterms>
<title_id>BU1111</title_id>
</Sales>
<Sales>
<stor_id>8042</stor_id>
<ord_num>QA879.1</ord_num>
<ord_date>1993-05-22T00:00:00.0000000+02:00</ord_date>
<qty>30</qty>
<payterms>Net 30</payterms>
<title_id>PC1035</title_id>
</Sales>
<Stores>
<stor_id>6380</stor_id>
<stor_name>Eric the Read Books</stor_name>
<stor_address>788 Catamaugus Ave.</stor_address>
<city>Seattle</city>
<state>WA</state>
<zip>98056</zip>
</Stores>
<Stores>
<stor_id>7066</stor_id>
<stor_name>Barnum's</stor_name>
<stor_address>567 Pasadena Ave.</stor_address>
<city>Tustin</city>
<state>CA</state>
<zip>92789</zip>
</Stores>
<Stores>
<stor_id>7067</stor_id>
<stor_name>News &amp; Brews</stor_name>
<stor_address>577 First St.</stor_address>
<city>Los Gatos</city>
<state>CA</state>
<zip>96745</zip>
</Stores>
<Stores>
<stor_id>7131</stor_id>
<stor_name>Doc-U-Mat: Quality Laundry and Books</stor_name>
<stor_address>24-A Avogadro Way</stor_address>
<city>Remulade</city>
<state>WA</state>

<zip>98014</zip>
</Stores>
<Stores>
<stor_id>7896</stor_id>
<stor_name>Fricative Bookshop</stor_name>
<stor_address>89 Madison St.</stor_address>
<city>Fremont</city>
<state>CA</state>
<zip>90019</zip>
</Stores>
<Stores>
<stor_id>8042</stor_id>
<stor_name>Bookbeat</stor_name>
<stor_address>679 Carson St.</stor_address>
<city>Portland</city>
<state>OR</state>
<zip>89076</zip>
</Stores>
</NewDataSet>

The overloaded WriteXml methods include a set that has a second parameter, XmlWriteMode . This
parameter is used to specify how to write the data and schema contents of the DataSet . Table 10.2
describes the XmlWriteMode enumeration values.

Table 10.2. The XmlWriteMode Enumeration

DiffGram
Writes the DataSet contents as a DiffGram , with both original and current values for all rows.

WriteSchema
Writes the DataSet contents as XML data, including an inline XML schema. If there is a schema,
but no data, the schema is written. If the DataSet does not have a schema defined, nothing is
written.

IgnoreSchema
The default mode. Writes the DataSet contents as XML data, without a schema.

Note
The DataSet also has a method GetXml . This method returns a string of XML representing the
data in the DataSet . It has the same effect as calling WriteXml with the XmlWriteMode set to
IgnoreSchema . Fetching the data as a string may often be more flexible, but doing so requires
more effort if all you want to do is to write the data to a file.
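
For instance, continuing with the dsSales DataSet filled earlier (a sketch):

' GetXml returns the XML as an in-memory string (no inline schema)...
Dim xmlText As String = dsSales.GetXml()
' ...handy when the XML is to be logged or transmitted rather than saved to disk.
Console.WriteLine(xmlText)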

To write the DataSet 's schema as an independent XSD schema file (instead of inline with the data), use the
WriteXmlSchema method:

dsSales.WriteXmlSchema("..\StoreSales.xsd")

Listing 10.5 shows the contents of the resulting StoreSales.xsd file.


Listing 10.5 The contents of StoreSales.xsd, which is the schema of the dsSales DataSet

<?xml version="1.0" standalone="yes"?>


<xs:schema id="NewDataSet" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:
msdata="urn:schemas-microsoft-com:xml-msdata">
<xs:element name="NewDataSet" msdata:IsDataSet="true">
<xs:complexType>
<xs:choice maxOccurs="unbounded">
<xs:element name="Sales">
<xs:complexType>
<xs:sequence>
<xs:element name="stor_id" type="xs:string" minOccurs="0" />
<xs:element name="ord_num" type="xs:string" minOccurs="0" />
<xs:element name="ord_date" type="xs:dateTime" minOccurs="0" />
<xs:element name="qty" type="xs:short" minOccurs="0" />
<xs:element name="payterms" type="xs:string" minOccurs="0" />
<xs:element name="title_id" type="xs:string" minOccurs="0" />
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="Stores">
<xs:complexType>
<xs:sequence>
<xs:element name="stor_id" type="xs:string" minOccurs="0" />
<xs:element name="stor_name" type="xs:string" minOccurs="0" />
<xs:element name="stor_address" type="xs:string" minOccurs="0" />
<xs:element name="city" type="xs:string" minOccurs="0" />
<xs:element name="state" type="xs:string" minOccurs="0" />
<xs:element name="zip" type="xs:string" minOccurs="0" />
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:choice>
</xs:complexType>
</xs:element>
</xs:schema>

DiffGrams

The enumerations for both XmlReadMode and XmlWriteMode refer to XML formatted as a DiffGram , but
we haven't yet discussed this format. A DiffGram is an XML format that not only contains the current values
of the data elements, but also contains the original values of rows that have been modified or deleted (since
the last call to AcceptChanges ). That is, a DiffGram is the serialization format that the DataSet uses to
transport its data to another process or computer. Because it is XML, it can also be used to pass data easily
to and from other platforms, such as UNIX or Linux.
A DiffGram is divided into three sections. The first section contains the current values, regardless of
whether they have been modified, of all of the rows in the DataSet . Any element (row) that has been
modified is indicated by the diffgr:hasChanges="modified" annotation, and any added element (row) is
indicated by the diffgr:hasChanges="inserted" annotation. The second section contains the original values of
modified and deleted rows. These elements are linked to the corresponding elements in the first section by
the diffgr:id="xxx" annotation, where "xxx" is the specific row identifier. The third section contains error
information for specific rows. Here, too, the error elements are linked to the elements in the first section
via the diffgr:id="xxx" annotation.
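
Stripped of actual data, the three sections sit in a skeleton like this (a hypothetical
sketch; the table name MyTable and the row contents are illustrative):

<diffgr:diffgram xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1"
                 xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
  <NewDataSet>
    <!-- Section 1: current values; changed rows carry diffgr:hasChanges -->
    <MyTable diffgr:id="MyTable1" diffgr:hasChanges="modified">...</MyTable>
  </NewDataSet>
  <diffgr:before>
    <!-- Section 2: original values of modified and deleted rows -->
    <MyTable diffgr:id="MyTable1">...</MyTable>
  </diffgr:before>
  <diffgr:errors>
    <!-- Section 3: per-row error information, linked by diffgr:id -->
  </diffgr:errors>
</diffgr:diffgram>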
You can generate a DiffGram XML file by adding code to the end of the btnWriteXML_Click subroutine of
Listing 10.3 to make some changes to the data in the DataSet and then write the data as a DiffGram ,
as follows:

Private Sub btnWriteXML_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnWriteXML.Click
    Dim dsSales As New DataSet()
    Dim cn As New SqlConnection _
        ("data source=localhost;initial catalog=pubs;user id=sa")
    Dim daSales As New SqlDataAdapter("select * from sales", cn)
    Dim daStores As New SqlDataAdapter("select * from stores", cn)
    ' Load relational data from database.
    daSales.Fill(dsSales, "Sales")
    daStores.Fill(dsSales, "Stores")
    ' Write XML out to file.
    dsSales.WriteXml("..\StoreSales.xml")
    ' Write out schema as XSD file.
    dsSales.WriteXmlSchema("..\StoreSales.xsd")
    ' Make some changes: modify, delete, and insert a new row.
    dsSales.Tables("Stores").Rows(0)("stor_id") = 999     ' Modify
    dsSales.Tables("Stores").Rows(1).Delete()             ' Delete
    Dim rr As DataRow = dsSales.Tables("Stores").NewRow()
    rr("stor_name") = "New Store"
    dsSales.Tables("Stores").Rows.Add(rr)                 ' Insert
    ' Write out data as DiffGram.
    dsSales.WriteXml("..\DiffGram.xml", XmlWriteMode.DiffGram)
End Sub

Listing 10.6 shows the contents of the file DiffGram.xml, which is produced by running the ADO-XML project
and clicking on the Write XML button. Because the changes were made to the Stores table, they appear at
the end of the file. The deleted row no longer appears in the section of current data, but appears
in the "before" section along with the original values of the modified row. The current data section also
contains the new row, marked as "inserted".
Listing 10.6 A DiffGram XML file with one inserted row, one deleted row, and one modified row

<?xml version="1.0" standalone="yes"?>


<diffgr:diffgram xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" xmlns:diffgr="urn:
schemas-microsoft-com:xml-diffgram-v1">
<NewDataSet>
<Sales diffgr:id="Sales1" msdata:rowOrder="0">
<stor_id>6380</stor_id>
<ord_num>6871</ord_num>
<ord_date>1994-09-14T00:00:00.0000000+02:00</ord_date>
<qty>5</qty>
<payterms>Net 60</payterms>
<title_id>BU1032</title_id>
</Sales>
<Sales diffgr:id="Sales2" msdata:rowOrder="1">
<stor_id>6380</stor_id>
<ord_num>722a</ord_num>
<ord_date>1994-09-13T00:00:00.0000000+02:00</ord_date>
<qty>3</qty>
<payterms>Net 60</payterms>
<title_id>PS2091</title_id>
</Sales>
<Sales diffgr:id="Sales3" msdata:rowOrder="2">
<stor_id>7066</stor_id>
<ord_num>A2976</ord_num>
<ord_date>1993-05-24T00:00:00.0000000+02:00</ord_date>
<qty>50</qty>
<payterms>Net 30</payterms>
<title_id>PC8888</title_id>
</Sales>
<Sales diffgr:id="Sales4" msdata:rowOrder="3">
<stor_id>7066</stor_id>
<ord_num>QA7442.3</ord_num>
<ord_date>1994-09-13T00:00:00.0000000+02:00</ord_date>
<qty>75</qty>
<payterms>ON invoice</payterms>
<title_id>PS2091</title_id>
</Sales>
<Sales diffgr:id="Sales5" msdata:rowOrder="4">
<stor_id>7067</stor_id>
<ord_num>D4482</ord_num>
<ord_date>1994-09-14T00:00:00.0000000+02:00</ord_date>
<qty>10</qty>
<payterms>Net 60</payterms>
<title_id>PS2091</title_id>

</Sales>
<Sales diffgr:id="Sales6" msdata:rowOrder="5">
<stor_id>7067</stor_id>
<ord_num>P2121</ord_num>
<ord_date>1992-06-15T00:00:00.0000000+02:00</ord_date>
<qty>40</qty>
<payterms>Net 30</payterms>
<title_id>TC3218</title_id>
</Sales>
<Sales diffgr:id="Sales7" msdata:rowOrder="6">
<stor_id>7067</stor_id>
<ord_num>P2121</ord_num>
<ord_date>1992-06-15T00:00:00.0000000+02:00</ord_date>
<qty>20</qty>
<payterms>Net 30</payterms>
<title_id>TC4203</title_id>
</Sales>
<Sales diffgr:id="Sales8" msdata:rowOrder="7">
<stor_id>7067</stor_id>
<ord_num>P2121</ord_num>
<ord_date>1992-06-15T00:00:00.0000000+02:00</ord_date>
<qty>20</qty>
<payterms>Net 30</payterms>
<title_id>TC7777</title_id>
</Sales>
<Sales diffgr:id="Sales9" msdata:rowOrder="8">
<stor_id>7131</stor_id>
<ord_num>N914008</ord_num>
<ord_date>1994-09-14T00:00:00.0000000+02:00</ord_date>
<qty>20</qty>
<payterms>Net 30</payterms>
<title_id>PS2091</title_id>
</Sales>
<Sales diffgr:id="Sales10" msdata:rowOrder="9">
<stor_id>7131</stor_id>
<ord_num>N914014</ord_num>
<ord_date>1994-09-14T00:00:00.0000000+02:00</ord_date>
<qty>25</qty>
<payterms>Net 30</payterms>
<title_id>MC3021</title_id>
</Sales>
<Sales diffgr:id="Sales11" msdata:rowOrder="10">
<stor_id>7131</stor_id>
<ord_num>P3087a</ord_num>
<ord_date>1993-05-29T00:00:00.0000000+02:00</ord_date>
<qty>20</qty>
<payterms>Net 60</payterms>
<title_id>PS1372</title_id>
</Sales>
<Sales diffgr:id="Sales12" msdata:rowOrder="11">
<stor_id>7131</stor_id>
<ord_num>P3087a</ord_num>

<ord_date>1993-05-29T00:00:00.0000000+02:00</ord_date>
<qty>25</qty>
<payterms>Net 60</payterms>
<title_id>PS2106</title_id>
</Sales>
<Sales diffgr:id="Sales13" msdata:rowOrder="12">
<stor_id>7131</stor_id>
<ord_num>P3087a</ord_num>
<ord_date>1993-05-29T00:00:00.0000000+02:00</ord_date>
<qty>15</qty>
<payterms>Net 60</payterms>
<title_id>PS3333</title_id>
</Sales>
<Sales diffgr:id="Sales14" msdata:rowOrder="13">
<stor_id>7131</stor_id>
<ord_num>P3087a</ord_num>
<ord_date>1993-05-29T00:00:00.0000000+02:00</ord_date>
<qty>25</qty>
<payterms>Net 60</payterms>
<title_id>PS7777</title_id>
</Sales>
<Sales diffgr:id="Sales15" msdata:rowOrder="14">
<stor_id>7896</stor_id>
<ord_num>QQ2299</ord_num>
<ord_date>1993-10-28T00:00:00.0000000+02:00</ord_date>
<qty>15</qty>
<payterms>Net 60</payterms>
<title_id>BU7832</title_id>
</Sales>
<Sales diffgr:id="Sales16" msdata:rowOrder="15">
<stor_id>7896</stor_id>
<ord_num>TQ456</ord_num>
<ord_date>1993-12-12T00:00:00.0000000+02:00</ord_date>
<qty>10</qty>
<payterms>Net 60</payterms>
<title_id>MC2222</title_id>
</Sales>
<Sales diffgr:id="Sales17" msdata:rowOrder="16">
<stor_id>7896</stor_id>
<ord_num>X999</ord_num>
<ord_date>1993-02-21T00:00:00.0000000+02:00</ord_date>
<qty>35</qty>
<payterms>ON invoice</payterms>
<title_id>BU2075</title_id>
</Sales>
<Sales diffgr:id="Sales18" msdata:rowOrder="17">
<stor_id>8042</stor_id>
<ord_num>423LL922</ord_num>
<ord_date>1994-09-14T00:00:00.0000000+02:00</ord_date>
<qty>15</qty>
<payterms>ON invoice</payterms>
<title_id>MC3021</title_id>

</Sales>
<Sales diffgr:id="Sales19" msdata:rowOrder="18">
<stor_id>8042</stor_id>
<ord_num>423LL930</ord_num>
<ord_date>1994-09-14T00:00:00.0000000+02:00</ord_date>
<qty>10</qty>
<payterms>ON invoice</payterms>
<title_id>BU1032</title_id>
</Sales>
<Sales diffgr:id="Sales20" msdata:rowOrder="19">
<stor_id>8042</stor_id>
<ord_num>P723</ord_num>
<ord_date>1993-03-11T00:00:00.0000000+02:00</ord_date>
<qty>25</qty>
<payterms>Net 30</payterms>
<title_id>BU1111</title_id>
</Sales>
<Sales diffgr:id="Sales21" msdata:rowOrder="20">
<stor_id>8042</stor_id>
<ord_num>QA879.1</ord_num>
<ord_date>1993-05-22T00:00:00.0000000+02:00</ord_date>
<qty>30</qty>
<payterms>Net 30</payterms>
<title_id>PC1035</title_id>
</Sales>
<Stores diffgr:id="Stores1" msdata:rowOrder="0" diffgr:hasChanges="modified">
<stor_id>999</stor_id>
<stor_name>Eric the Read Books</stor_name>
<stor_address>788 Catamaugus Ave.</stor_address>
<city>Seattle</city>
<state>WA</state>
<zip>98056</zip>
</Stores>
<Stores diffgr:id="Stores3" msdata:rowOrder="2">
<stor_id>7067</stor_id>
<stor_name>News &amp; Brews</stor_name>
<stor_address>577 First St.</stor_address>
<city>Los Gatos</city>
<state>CA</state>
<zip>96745</zip>
</Stores>
<Stores diffgr:id="Stores4" msdata:rowOrder="3">
<stor_id>7131</stor_id>
<stor_name>Doc-U-Mat: Quality Laundry and Books</stor_name>
<stor_address>24-A Avogadro Way</stor_address>
<city>Remulade</city>
<state>WA</state>
<zip>98014</zip>
</Stores>
<Stores diffgr:id="Stores5" msdata:rowOrder="4">
<stor_id>7896</stor_id>
<stor_name>Fricative Bookshop</stor_name>

<stor_address>89 Madison St.</stor_address>


<city>Fremont</city>
<state>CA</state>
<zip>90019</zip>
</Stores>
<Stores diffgr:id="Stores6" msdata:rowOrder="5">
<stor_id>8042</stor_id>
<stor_name>Bookbeat</stor_name>
<stor_address>679 Carson St.</stor_address>
<city>Portland</city>
<state>OR</state>
<zip>89076</zip>
</Stores>
<Stores diffgr:id="Stores7" msdata:rowOrder="6" diffgr:hasChanges="inserted">
<stor_name>New Store</stor_name>
</Stores>
</NewDataSet>
<diffgr:before>
<Stores diffgr:id="Stores1" msdata:rowOrder="0">
<stor_id>6380</stor_id>
<stor_name>Eric the Read Books</stor_name>
<stor_address>788 Catamaugus Ave.</stor_address>
<city>Seattle</city>
<state>WA</state>
<zip>98056</zip>
</Stores>
<Stores diffgr:id="Stores2" msdata:rowOrder="1">
<stor_id>7066</stor_id>
<stor_name>Barnum's</stor_name>
<stor_address>567 Pasadena Ave.</stor_address>
<city>Tustin</city>
<state>CA</state>
<zip>92789</zip>
</Stores>
</diffgr:before>
</diffgr:diffgram>

Note
If you want a DiffGram that contains only the rows in the DataSet that have been changed,
you can first call the GetChanges method:

Dim ChangedDataSet As DataSet = dsSales.GetChanges()
ChangedDataSet.WriteXml("..\Changes.xml", XmlWriteMode.DiffGram)

The resulting DiffGram file is shown in Listing 10.7 .

Listing 10.7 A DiffGram XML file, showing only changed rows

<?xml version="1.0" standalone="yes"?>


<diffgr:diffgram xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" xmlns:diffgr="urn:
schemas-microsoft-com:xml-diffgram-v1">
<NewDataSet>
<Stores diffgr:id="Stores1" msdata:rowOrder="0" diffgr:hasChanges="modified">
<stor_id>999</stor_id>
<stor_name>Eric the Read Books</stor_name>
<stor_address>788 Catamaugus Ave.</stor_address>
<city>Seattle</city>
<state>WA</state>
<zip>98056</zip>
</Stores>
<Stores diffgr:id="Stores3" msdata:rowOrder="2" diffgr:hasChanges="inserted">
<stor_name>New Store</stor_name>
</Stores>
</NewDataSet>
<diffgr:before>
<Stores diffgr:id="Stores1" msdata:rowOrder="0">
<stor_id>6380</stor_id>
<stor_name>Eric the Read Books</stor_name>
<stor_address>788 Catamaugus Ave.</stor_address>
<city>Seattle</city>
<state>WA</state>
<zip>98056</zip>
</Stores>
<Stores diffgr:id="Stores2" msdata:rowOrder="1">
<stor_id>7066</stor_id>
<stor_name>Barnum's</stor_name>
<stor_address>567 Pasadena Ave.</stor_address>
<city>Tustin</city>
<state>CA</state>
<zip>92789</zip>
</Stores>
</diffgr:before>
</diffgr:diffgram>

Business Case 10.1: Preparing XML Files for Business Partners


The Jones Novelty company is increasingly working electronically with many of its business suppliers and
customers. This trend will require a more extensive solution several years down the road, in which case
something like Microsoft BizTalk Server would probably be appropriate. In the meantime, Brad Jones still
needs to meet his current demands for using XML to support electronic transactions and to "get his feet wet"
in this area. Jones is going to use some of the XML capabilities that we've just discussed, along with one or
two additional features, to meet these requirements. He can get pretty far, even without the "heavy-duty"
platforms and tools or the more advanced XML technologies such as XSLT.

First, Jones wants to send an XML file of the items he has in inventory. All the table columns except the
WholesalePrice column are to be sent; he currently isn't interested in sharing or exposing that information.
Although he could obviously get what he wants by creating a query that includes all but that one column, he
chooses a technique involving the use of XML properties. The other requirements for this XML include an
inline XSD schema that describes the data and exposure of all the columns as elements, except for the ID
column, which is exposed as an attribute.
Building this application is very straightforward. Jones's database developer does the following:

1. Creates a new Visual Basic Windows Application project
2. Names the project BusinessCase10
3. Specifies a path for saving the project files
4. Enlarges the size of Form1
5. In the Properties window for Form1, sets its Name property to frmPrepareXML and its Text
   property to Prepare XML
6. In the upper-left corner of the form, adds a button from the Windows Forms tab of the Toolbox

In the Properties window, she sets the Name property of the button to btnInventory and the Text property to
Create Inventory XML. As usual, she adds

Imports System
Imports System.Data
Imports System.Data.SqlClient

to the top of the file. Then she adds the following routine within the frmPrepareXML class definition:

Dim cn As New SqlConnection _
    ("data source=localhost;initial catalog=Novelty;user id=sa")

Private Sub btnInventory_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnInventory.Click
    Dim dsInventory As New DataSet()
    Dim daInventory As New SqlDataAdapter _
        ("select * from tblInventory", cn)
    daInventory.Fill(dsInventory, "tblInventory")
    ' Write ID column as XML attribute, rather than element.
    dsInventory.Tables("tblInventory").Columns("ID").ColumnMapping = _
        MappingType.Attribute
    ' Hide the WholesalePrice column from the saved XML.
    dsInventory.Tables("tblInventory").Columns("WholesalePrice").ColumnMapping = _
        MappingType.Hidden
    ' Write data as XML file, including inline schema.
    dsInventory.WriteXml("..\Inventory.xml", XmlWriteMode.WriteSchema)
End Sub

After the DataSet has been filled with the data from the database, two code statements specify how to form
the XML. The first,

dsInventory.Tables("tblInventory").Columns("ID").ColumnMapping = MappingType.Attribute

specifies that the ID column should be saved as an XML attribute. The second,

dsInventory.Tables("tblInventory").Columns("WholesalePrice"). _
ColumnMapping = MappingType.Hidden

specifies that the WholesalePrice column should be hidden and not written as part of the XML.
Finally, the second parameter to the WriteXml method specifies that the schema should be included along
with the actual data. The resulting file is shown in Listing 10.8.
Listing 10.8 The tblInventory table saved as an XML file

<?xml version="1.0" standalone="yes"?>


<NewDataSet>
<xs:schema id="NewDataSet" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
<xs:element name="NewDataSet" msdata:IsDataSet="true">
<xs:complexType>
<xs:choice maxOccurs="unbounded">
<xs:element name="tblInventory">
<xs:complexType>
<xs:sequence>
<xs:element name="ProductName" type="xs:string" minOccurs="0" msdata:Ordinal="1" />
<xs:element name="RetailPrice" type="xs:decimal" minOccurs="0" msdata:Ordinal="3" />
<xs:element name="Description" type="xs:string" minOccurs="0" msdata:Ordinal="4" />
</xs:sequence>
<xs:attribute name="ID" type="xs:int" />
<xs:attribute name="WholesalePrice" type="xs:decimal" use="prohibited" />
</xs:complexType>
</xs:element>
</xs:choice>
</xs:complexType>
</xs:element>
</xs:schema>
<tblInventory ID="1">
<ProductName>Rubber Chicken</ProductName>

<RetailPrice>2.99</RetailPrice>
<Description>The quintessential rubber chicken.</Description>
</tblInventory>
<tblInventory ID="2">
<ProductName>Joy Buzzer</ProductName>
<RetailPrice>9.99</RetailPrice>
<Description>They will get a real shock out of this.</Description>
</tblInventory>
<tblInventory ID="3">
<ProductName>Seltzer Bottle</ProductName>
<RetailPrice>15.24</RetailPrice>
<Description>Seltzer sold separately.</Description>
</tblInventory>
<tblInventory ID="4">
<ProductName>Ant Farm</ProductName>
<RetailPrice>14.99</RetailPrice>
<Description>Watch ants where they live and breed.</Description>
</tblInventory>
<tblInventory ID="5">
<ProductName>Wind-Up Robot</ProductName>
<RetailPrice>29.99</RetailPrice>
<Description>Giant robot:attack toybox!</Description>
</tblInventory>
<tblInventory ID="6">
<ProductName>Rubber Eyeballs</ProductName>
<RetailPrice>0.99</RetailPrice>
<Description>Peek-a-boo!</Description>
</tblInventory>
<tblInventory ID="7">
<ProductName>Doggy Mess</ProductName>
<RetailPrice>1.99</RetailPrice>
<Description>Yechhh!</Description>
</tblInventory>
<tblInventory ID="8">
<ProductName>Mini-Camera</ProductName>
<RetailPrice>9.99</RetailPrice>
<Description>For future spies!</Description>
</tblInventory>
<tblInventory ID="9">
<ProductName>Glow Worms</ProductName>
<RetailPrice>1.99</RetailPrice>
<Description>Makes them easy to find</Description>
</tblInventory>
<tblInventory ID="10">
<ProductName>Insect Pops</ProductName>
<RetailPrice>0.99</RetailPrice>
<Description>Special treats</Description>
</tblInventory>
<tblInventory ID="11">
<ProductName>Alien Alarm Clock</ProductName>
<RetailPrice>45.99</RetailPrice>
<Description>Do you know what time it is out there?</Description>

</tblInventory>
<tblInventory ID="12">
<ProductName>Cinnamon Toothpicks</ProductName>
<RetailPrice>1.99</RetailPrice>
<Description>Really wakes up your mouth</Description>
</tblInventory>
</NewDataSet>
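For reference, the two ColumnMapping settings used in this example are drawn from the four values of the MappingType enumeration. The following sketch, reusing the tblInventory columns from above, summarizes all four; the SimpleContent notes reflect the enumeration's documented restrictions rather than anything used in this Business Case:

```vbnet
' Sketch: the four MappingType values and their effect on WriteXml output.
Dim tbl As DataTable = dsInventory.Tables("tblInventory")
tbl.Columns("ProductName").ColumnMapping = MappingType.Element   ' child element (the default)
tbl.Columns("ID").ColumnMapping = MappingType.Attribute          ' attribute on the row element
tbl.Columns("WholesalePrice").ColumnMapping = MappingType.Hidden ' omitted from the XML entirely
' MappingType.SimpleContent writes a column's value as the row element's
' text content; only one column per table can be mapped this way, and it
' cannot be combined with Element-mapped columns.
```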

The second issue that Jones has to deal with is the fact that the company handling the payroll for Jones
Novelty wants the basic employee information in XML format, organized by department. The database
developer adds a second button, btnEmployees, to frmPrepareXML and adds the code shown in Listing
10.9 to the frmPrepareXML class.
Listing 10.9 Code to save data from both tblEmployee and tblDepartment as an XML file

Private Sub btnEmployees_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnEmployees.Click
    Dim dsEmployees As New DataSet()
    Dim daEmployees As New SqlDataAdapter _
        ("select * from tblEmployee", cn)
    Dim daDepartments As New SqlDataAdapter _
        ("select * from tblDepartment", cn)
    daDepartments.Fill(dsEmployees, "tblDepartment")
    daEmployees.Fill(dsEmployees, "tblEmployee")
    ' Define Relation between tables.
    dsEmployees.Relations.Add("DepartmentEmployees", _
        dsEmployees.Tables("tblDepartment").Columns("ID"), _
        dsEmployees.Tables("tblEmployee").Columns("DepartmentID"))
    ' Write data as XML file.
    dsEmployees.WriteXml("..\Employees.xml")
End Sub

This code uses the default settings of the DataSet and saves the data from both tblEmployee and
tblDepartment. The XML produced is shown in Listing 10.10.
Listing 10.10 XML produced to save the data from tblDepartment and tblEmployee

<?xml version="1.0" standalone="yes"?>


<NewDataSet>
<tblDepartment>
<ID>1</ID>
<DepartmentName>Administration</DepartmentName>
</tblDepartment>
<tblDepartment>
<ID>2</ID>

<DepartmentName>Engineering</DepartmentName>
</tblDepartment>
<tblDepartment>
<ID>3</ID>
<DepartmentName>Sales</DepartmentName>
</tblDepartment>
<tblDepartment>
<ID>4</ID>
<DepartmentName>Marketing</DepartmentName>
</tblDepartment>
<tblEmployee>
<ID>2032</ID>
<FirstName>Carole</FirstName>
<LastName>Vermeren</LastName>
<DepartmentID>2</DepartmentID>
<Salary>222</Salary>
</tblEmployee>
<tblEmployee>
<ID>2033</ID>
<FirstName>Cathy</FirstName>
<LastName>Johnson</LastName>
<DepartmentID>2</DepartmentID>
<Salary>13000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2034</ID>
<FirstName>Eric</FirstName>
<LastName>Haglund</LastName>
<DepartmentID>4</DepartmentID>
<Salary>12000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2035</ID>
<FirstName>Julie</FirstName>
<LastName>Ryan</LastName>
<DepartmentID>1</DepartmentID>
<Salary>4000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2036</ID>
<FirstName>Richard</FirstName>
<LastName>Halpin</LastName>
<DepartmentID>2</DepartmentID>
<Salary>10000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2037</ID>
<FirstName>Kathleen</FirstName>
<LastName>Johnson</LastName>
<DepartmentID>3</DepartmentID>
<Salary>18000</Salary>
</tblEmployee>

<tblEmployee>
<ID>2038</ID>
<FirstName>Sorel</FirstName>
<LastName>Polito</LastName>
<DepartmentID>4</DepartmentID>
<Salary>28000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2039</ID>
<FirstName>Sorel</FirstName>
<LastName>Terman</LastName>
<DepartmentID>1</DepartmentID>
<Salary>8000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2040</ID>
<FirstName>Randy</FirstName>
<LastName>Hobaica</LastName>
<DepartmentID>2</DepartmentID>
<Salary>18000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2041</ID>
<FirstName>Matthew</FirstName>
<LastName>Haglund</LastName>
<DepartmentID>3</DepartmentID>
<Salary>30000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2042</ID>
<FirstName>Cathy</FirstName>
<LastName>Vermeren</LastName>
<DepartmentID>4</DepartmentID>
<Salary>0</Salary>
</tblEmployee>
<tblEmployee>
<ID>2043</ID>
<FirstName>Brad</FirstName>
<LastName>Townsend</LastName>
<DepartmentID>2</DepartmentID>
<Salary>12000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2044</ID>
<FirstName>Jennifer</FirstName>
<LastName>Eves</LastName>
<DepartmentID>2</DepartmentID>
<Salary>26000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2045</ID>
<FirstName>Steve</FirstName>

<LastName>Marshall</LastName>
<DepartmentID>3</DepartmentID>
<Salary>42000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2046</ID>
<FirstName>Laura</FirstName>
<LastName>Davidson</LastName>
<DepartmentID>4</DepartmentID>
<Salary>60000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2047</ID>
<FirstName>Angela</FirstName>
<LastName>Stefanac</LastName>
<DepartmentID>2</DepartmentID>
<Salary>16000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2048</ID>
<FirstName>Marjorie</FirstName>
<LastName>Bassett</LastName>
<DepartmentID>2</DepartmentID>
<Salary>34000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2049</ID>
<FirstName>Joe</FirstName>
<LastName>Chideya</LastName>
<DepartmentID>3</DepartmentID>
<Salary>54000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2050</ID>
<FirstName>Katie</FirstName>
<LastName>Chideya</LastName>
<DepartmentID>4</DepartmentID>
<Salary>76000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2051</ID>
<FirstName>Terri</FirstName>
<LastName>Allen</LastName>
<DepartmentID>1</DepartmentID>
<Salary>20000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2052</ID>
<FirstName>Mike</FirstName>
<LastName>Doberstein</LastName>
<DepartmentID>2</DepartmentID>
<Salary>42000</Salary>

</tblEmployee>
<tblEmployee>
<ID>2053</ID>
<FirstName>Terri</FirstName>
<LastName>Woodruff</LastName>
<DepartmentID>3</DepartmentID>
<Salary>66000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2054</ID>
<FirstName>Cathy</FirstName>
<LastName>Rosenthal</LastName>
<DepartmentID>4</DepartmentID>
<Salary>5555</Salary>
</tblEmployee>
<tblEmployee>
<ID>2055</ID>
<FirstName>Margaret</FirstName>
<LastName>Eves</LastName>
<DepartmentID>1</DepartmentID>
<Salary>24000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2056</ID>
<FirstName>Mikki</FirstName>
<LastName>Lemay</LastName>
<DepartmentID>2</DepartmentID>
<Salary>50000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2057</ID>
<FirstName>Randy</FirstName>
<LastName>Nelson</LastName>
<DepartmentID>3</DepartmentID>
<Salary>78000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2058</ID>
<FirstName>Kathleen</FirstName>
<LastName>Husbands</LastName>
<DepartmentID>4</DepartmentID>
<Salary>108000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2059</ID>
<FirstName>Kathleen</FirstName>
<LastName>Eberman</LastName>
<DepartmentID>1</DepartmentID>
<Salary>28000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2060</ID>

<FirstName>Richard</FirstName>
<LastName>Rosenthal</LastName>
<DepartmentID>2</DepartmentID>
<Salary>58000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2061</ID>
<FirstName>Mike</FirstName>
<LastName>Woodruff</LastName>
<DepartmentID>3</DepartmentID>
<Salary>90000</Salary>
</tblEmployee>
</NewDataSet>

Unfortunately, this XML isn't really what the payroll vendor wants. Even though the database developer has
created a Relation to link the parent table (tblDepartment) to the child table (tblEmployee), the XML
produced still lists the data from the two tables separately. To nest the child elements within the parent
elements, she needs to set the Relation's Nested property to True:

dsEmployees.Relations("DepartmentEmployees").Nested = True

If she adds the preceding line before writing the XML, she gets the results shown in Listing 10.11, which is
what the payroll vendor really wants.
Listing 10.11 XML file with the tblEmployee data nested within the tblDepartment data

<?xml version="1.0" standalone="yes"?>


<NewDataSet>
<tblDepartment>
<ID>1</ID>
<DepartmentName>Administration</DepartmentName>
<tblEmployee>
<ID>2035</ID>
<FirstName>Julie</FirstName>
<LastName>Ryan</LastName>
<DepartmentID>1</DepartmentID>
<Salary>4000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2039</ID>
<FirstName>Sorel</FirstName>
<LastName>Terman</LastName>
<DepartmentID>1</DepartmentID>
<Salary>8000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2051</ID>
<FirstName>Terri</FirstName>

<LastName>Allen</LastName>
<DepartmentID>1</DepartmentID>
<Salary>20000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2055</ID>
<FirstName>Margaret</FirstName>
<LastName>Eves</LastName>
<DepartmentID>1</DepartmentID>
<Salary>24000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2059</ID>
<FirstName>Kathleen</FirstName>
<LastName>Eberman</LastName>
<DepartmentID>1</DepartmentID>
<Salary>28000</Salary>
</tblEmployee>
</tblDepartment>
<tblDepartment>
<ID>2</ID>
<DepartmentName>Engineering</DepartmentName>
<tblEmployee>
<ID>2032</ID>
<FirstName>Carole</FirstName>
<LastName>Vermeren</LastName>
<DepartmentID>2</DepartmentID>
<Salary>222</Salary>
</tblEmployee>
<tblEmployee>
<ID>2033</ID>
<FirstName>Cathy</FirstName>
<LastName>Johnson</LastName>
<DepartmentID>2</DepartmentID>
<Salary>13000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2036</ID>
<FirstName>Richard</FirstName>
<LastName>Halpin</LastName>
<DepartmentID>2</DepartmentID>
<Salary>10000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2040</ID>
<FirstName>Randy</FirstName>
<LastName>Hobaica</LastName>
<DepartmentID>2</DepartmentID>
<Salary>18000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2043</ID>

<FirstName>Brad</FirstName>
<LastName>Townsend</LastName>
<DepartmentID>2</DepartmentID>
<Salary>12000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2044</ID>
<FirstName>Jennifer</FirstName>
<LastName>Eves</LastName>
<DepartmentID>2</DepartmentID>
<Salary>26000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2047</ID>
<FirstName>Angela</FirstName>
<LastName>Stefanac</LastName>
<DepartmentID>2</DepartmentID>
<Salary>16000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2048</ID>
<FirstName>Marjorie</FirstName>
<LastName>Bassett</LastName>
<DepartmentID>2</DepartmentID>
<Salary>34000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2052</ID>
<FirstName>Mike</FirstName>
<LastName>Doberstein</LastName>
<DepartmentID>2</DepartmentID>
<Salary>42000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2056</ID>
<FirstName>Mikki</FirstName>
<LastName>Lemay</LastName>
<DepartmentID>2</DepartmentID>
<Salary>50000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2060</ID>
<FirstName>Richard</FirstName>
<LastName>Rosenthal</LastName>
<DepartmentID>2</DepartmentID>
<Salary>58000</Salary>
</tblEmployee>
</tblDepartment>
<tblDepartment>
<ID>3</ID>
<DepartmentName>Sales</DepartmentName>
<tblEmployee>

<ID>2037</ID>
<FirstName>Kathleen</FirstName>
<LastName>Johnson</LastName>
<DepartmentID>3</DepartmentID>
<Salary>18000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2041</ID>
<FirstName>Matthew</FirstName>
<LastName>Haglund</LastName>
<DepartmentID>3</DepartmentID>
<Salary>30000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2045</ID>
<FirstName>Steve</FirstName>
<LastName>Marshall</LastName>
<DepartmentID>3</DepartmentID>
<Salary>42000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2049</ID>
<FirstName>Joe</FirstName>
<LastName>Chideya</LastName>
<DepartmentID>3</DepartmentID>
<Salary>54000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2053</ID>
<FirstName>Terri</FirstName>
<LastName>Woodruff</LastName>
<DepartmentID>3</DepartmentID>
<Salary>66000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2057</ID>
<FirstName>Randy</FirstName>
<LastName>Nelson</LastName>
<DepartmentID>3</DepartmentID>
<Salary>78000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2061</ID>
<FirstName>Mike</FirstName>
<LastName>Woodruff</LastName>
<DepartmentID>3</DepartmentID>
<Salary>90000</Salary>
</tblEmployee>
</tblDepartment>
<tblDepartment>
<ID>4</ID>
<DepartmentName>Marketing</DepartmentName>

<tblEmployee>
<ID>2034</ID>
<FirstName>Eric</FirstName>
<LastName>Haglund</LastName>
<DepartmentID>4</DepartmentID>
<Salary>12000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2038</ID>
<FirstName>Sorel</FirstName>
<LastName>Polito</LastName>
<DepartmentID>4</DepartmentID>
<Salary>28000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2042</ID>
<FirstName>Cathy</FirstName>
<LastName>Vermeren</LastName>
<DepartmentID>4</DepartmentID>
<Salary>0</Salary>
</tblEmployee>
<tblEmployee>
<ID>2046</ID>
<FirstName>Laura</FirstName>
<LastName>Davidson</LastName>
<DepartmentID>4</DepartmentID>
<Salary>60000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2050</ID>
<FirstName>Katie</FirstName>
<LastName>Chideya</LastName>
<DepartmentID>4</DepartmentID>
<Salary>76000</Salary>
</tblEmployee>
<tblEmployee>
<ID>2054</ID>
<FirstName>Cathy</FirstName>
<LastName>Rosenthal</LastName>
<DepartmentID>4</DepartmentID>
<Salary>5555</Salary>
</tblEmployee>
<tblEmployee>
<ID>2058</ID>
<FirstName>Kathleen</FirstName>
<LastName>Husbands</LastName>
<DepartmentID>4</DepartmentID>
<Salary>108000</Salary>
</tblEmployee>
</tblDepartment>
</NewDataSet>
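Pulling the pieces together, the relevant portion of the btnEmployees handler with nesting enabled would look like this (a sketch based on Listing 10.9; only the Nested line is new):

```vbnet
' Define the Relation, nest the child rows, then write the XML.
dsEmployees.Relations.Add("DepartmentEmployees", _
    dsEmployees.Tables("tblDepartment").Columns("ID"), _
    dsEmployees.Tables("tblEmployee").Columns("DepartmentID"))
dsEmployees.Relations("DepartmentEmployees").Nested = True
dsEmployees.WriteXml("..\Employees.xml")
```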

[ Team LiB ]

[ Team LiB ]

Creating an XmlReader from a Command Object


In Chapter 4 we covered the Command object, which is a core object of a .NET Data Provider. We discussed
its methods for executing commands, namely the ExecuteReader, ExecuteScalar, and
ExecuteNonQuery methods. We mentioned that, although all .NET Data Providers implement these
methods, the SqlCommand object supports an additional method, ExecuteXmlReader, which is used for
directly retrieving and accessing XML data from Microsoft SQL Server.
The ExecuteXmlReader method returns an XmlReader, in much the same way that the ExecuteReader
method returns a DataReader. Once you're familiar with the XmlReader object, you can obtain one simply
by calling the ExecuteXmlReader method.
Returning to the ADO-XML project, we do the following:

1. Add an additional button immediately below the btnWriteXML button from the Windows Forms tab of
the Toolbox
2. In the Properties window, set the Name property of the button to btnExecuteXML and set the Text
property to ExecuteXMLReader
3. Add the code shown in Listing 10.12 to the frmXML class.
Listing 10.12 Retrieving and handling data from SQL Server in XML format

Private Sub btnExecuteXML_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnExecuteXML.Click
    Dim cn As New SqlConnection _
        ("data source=localhost;initial catalog=pubs;user id=sa")
    Dim cmd As New SqlCommand _
        ("select * from stores for xml auto, elements", cn)
    Dim reader As Xml.XmlReader
    Dim str As New System.Text.StringBuilder()
    cn.Open()
    ' Execute SQL Select command, with FOR XML clause.
    reader = cmd.ExecuteXmlReader()
    ' Find and retrieve data from element nodes.
    While reader.Read()
        Select Case reader.NodeType
            Case Xml.XmlNodeType.Element
                str.Append("<" & reader.Name & ">")
            Case Xml.XmlNodeType.EndElement
                str.Append("</" & reader.Name & ">" & ControlChars.CrLf)
            Case Xml.XmlNodeType.Text
                str.Append(reader.Value)
            Case Else
                ' Ignore in this example.
        End Select
    End While
    MsgBox(str.ToString)
    cn.Close()
End Sub

The code in Listing 10.12 shows a simplified use of the ExecuteXmlReader method. All it does is display the
data (including column tags) contained in the stores table of the pubs database. The SQL Select command
sent to the SQL Server specifies explicitly that the columns are to be returned as XML elements:

"select * from stores for xml auto, elements"

Therefore we can simplify the handling of the various XML node types and just look at element begin and
end nodes and the text nodes that contain the actual data. A more robust handling of an XML document
would cover all possible node types in the Select-Case statement. The results of clicking on the
btnExecuteXML button and executing the code in Listing 10.12 are shown in Figure 10.2.
Figure 10.2. Message box with XML data retrieved directly from SQL Server
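The FOR XML clause used here supports several modes beyond AUTO, ELEMENTS. As a hedged sketch against the pubs sample database, the common variations could be set up as follows (only the third form is the one used in Listing 10.12):

```vbnet
' Sketch: common FOR XML modes.
' RAW: every row becomes a generic <row> element, columns as attributes.
Dim cmdRaw As New SqlCommand("select * from stores for xml raw", cn)
' AUTO: elements are named after the table; columns default to attributes.
Dim cmdAuto As New SqlCommand("select * from stores for xml auto", cn)
' AUTO, ELEMENTS: columns are returned as child elements instead.
Dim cmdElements As New SqlCommand( _
    "select * from stores for xml auto, elements", cn)
```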

[ Team LiB ]

[ Team LiB ]

The XmlDataDocument Object


In Chapter 9 we discussed the XmlDocument object and how it is used to access the hierarchical data of the
nodes of an XML document loaded into memory. We have also discussed throughout this book how to
retrieve and access relational data from a (traditional) SQL database. On the one hand, what if your data
comes from an XML source, but you want (or only know how) to navigate and manipulate it by using rows
and relational techniques? On the other hand, what if your data comes from an SQL database and you want
to navigate and manipulate it by using nodes and XML techniques?
The answer to these questions is the XmlDataDocument class. It is derived from the XmlDocument class but
adds an important dimension. While maintaining a single internal copy of data, it provides both XML node
access, like the XmlDocument class, and relational access via a DataSet property. The XmlDataDocument
automatically synchronizes the two views (or access methods), so that any change via one technique or
technology is immediately accessible via the other. This approach allows the mixing and matching of data
sources and access techniques.
Continuing with the ADO-XML project, let's look at two scenarios.

1. Add two additional buttons immediately below the btnExecuteXML button from the Windows Forms tab
of the Toolbox.
2. In the Properties window, set the Name property of the first button to btnNavigateSQL and set the Text
property to Navigate SQL.
3. In the Properties window, set the Name property of the second button to btnAddRows and set the Text
property to "Add rows to XML".
4. Add the XPath namespace to the end of the list of Import statements at the top of the file:

Imports System.Xml.XPath

5. Add the following two subroutines to the frmXML class:

Private Sub btnNavigateSQL_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles _
        btnNavigateSQL.Click
    Dim cn As New SqlConnection _
        ("data source=localhost;initial catalog=pubs;user id=sa")
    Dim da As New SqlDataAdapter("Select * from authors", cn)
    Dim ds As New DataSet()
    ' Fill DataSet with data from relational database.
    da.Fill(ds, "authors")
    ' Create an XmlDataDocument based on the existing DataSet.
    Dim xmlDoc As New Xml.XmlDataDocument(ds)
    ' Get a Navigator for the XmlDataDocument.
    Dim xmlNav As XPathNavigator = xmlDoc.CreateNavigator()
    ' Get all the author last names from California (state = CA).
    Dim xIterator As XPathNodeIterator
    xIterator = _
        xmlNav.Select("//authors[state='CA']/au_lname")
    ' Iterate over all of the selected nodes and
    ' display the author last names.
    Dim str As New System.Text.StringBuilder()
    While (xIterator.MoveNext())
        str.Append(xIterator.Current.Value & ControlChars.CrLf)
    End While
    MsgBox(str.ToString)
End Sub

Private Sub btnAddRows_Click(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnAddRows.Click
    Dim dsPubs As New DataSet()
    ' Read in XML from file.
    dsPubs.ReadXml("..\Pubs.xml")
    ' NOW add a new row.
    Dim row As DataRow = dsPubs.Tables("Publishers").NewRow()
    row("pub_name") = "Newbie Publishing Corp."
    row("city") = "New York"
    row("state") = "NY"
    row("Country") = "USA"
    dsPubs.Tables("Publishers").Rows.Add(row)
    ' Bind DataSet to Data Grid to see new data.
    grdData.DataMember = "publishers"
    grdData.DataSource = dsPubs
End Sub

The subroutine btnNavigateSQL_Click reads in data from a SQL Server database and then uses an XPath
query to navigate and iterate over a subset of the records. The key lines in this routine are

Dim xmlDoc As New Xml.XmlDataDocument(ds)
' Get a Navigator for the XmlDataDocument.
Dim xmlNav As XPathNavigator = xmlDoc.CreateNavigator()
' Get all the author last names from California (state = CA).
Dim xIterator As XPathNodeIterator
xIterator = _
    xmlNav.Select("//authors[state='CA']/au_lname")

First, the filled DataSet is associated with a new XmlDataDocument. An XPathNavigator is created on this
XmlDataDocument, allowing us to create an XPathNodeIterator. An XPath query string is passed to the
Select method, where the query is defined to return the author last name (au_lname) field for all the
author nodes where the state is "CA". The routine then iterates across all the nodes selected by the query,
building a string that contains the author last names. This string is then displayed in a message box, as
shown in Figure 10.3.
Figure 10.3. Message box with XML data retrieved directly from a DataSet

The second subroutine, btnAddRows_Click, goes the other way. It first executes ReadXml to read XML data
from the file Pubs.xml into the dsPubs DataSet, as in Listing 10.1. This method automatically creates a
Publishers table in the DataSet. The subroutine then adds new data via relational techniques and objects
such as the DataRow. A new DataRow is created with the schema of the Publishers table, and the row
columns are assigned values. The new row is added to the Publishers table and the result is displayed in a
DataGrid, as shown in Figure 10.4.
Figure 10.4. DataGrid with data from XML file and added row
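To see the synchronization working in the other direction, a change made through the XML view is immediately visible through the relational view. The following sketch assumes the authors DataSet filled in btnNavigateSQL_Click; the XPath expression and the replacement value are hypothetical illustrations, not code from the book:

```vbnet
' Sketch: edit a value via the XML (DOM) view, observe it via the DataSet view.
Dim xmlDoc As New Xml.XmlDataDocument(ds)
Dim node As Xml.XmlNode = _
    xmlDoc.SelectSingleNode("//authors[1]/au_lname")
node.InnerText = "Smith"
' The synchronized DataSet reflects the change at once:
MsgBox(ds.Tables("authors").Rows(0)("au_lname").ToString())
```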

[ Team LiB ]

[ Team LiB ]

Summary
In this chapter we showed the strong relationship and integration of data access and XML in the .NET
Framework. Specifically, the ADO.NET DataSet supports the reading and writing of both XML and XML
schemas. In the absence of an XML schema definition, the DataSet can infer the schema information from
data in an XML document that it reads.
From the XML side, the XmlDataDocument object provides the bridge between the relational world and the
hierarchical XML world. The data within the XmlDataDocument can be accessed either as XML or as
relational data. Changes in one view automatically appear in the other view. As a database developer, you
decide the best way to access and manipulate your data at any time.
Both relational database access and XML data manipulation are broad topics, with many books dedicated to
the technologies and tools of each. We couldn't cover anywhere near all these topics in this chapter, or even
in this book. The important thing to remember is that ADO.NET and the .NET XML objects have been
designed to integrate and cooperate fully with each other.

Questions and Answers

Q1:

Sometimes the .NET documentation refers to the XDR schema format, in addition to the
standard XSD format. What is the XDR format?

A1:

The XSD format is the XML Schema Definition format that is a recognized standard of the World
Wide Web Consortium (W3C), the organization that manages Web-related standards
(www.w3c.org). While waiting for the XSD standard, Microsoft worked with a preliminary subset,
called XDR (XML-Data Reduced), that it developed for defining XML schemas. Once the XSD
standard was finalized in May 2001, Microsoft fully embraced and supported that standard.
Although schemas that are written by the .NET Framework are saved only in the XSD format,
schemas can be read in either the XSD or XDR format, thereby maintaining compatibility with
older systems. The .NET Framework SDK also includes a tool, xsd.exe, that can (among other
things) convert a schema from XDR to XSD format.

Q2:

Can I find out exactly how ADO.NET automatically infers a schema from an XML
document?

A2:

The algorithm is actually well documented on MSDN in the Visual Studio help files. Starting with
the help topic Inferring DataSet Relational Structure from XML, you can find a summary of the
schema inference process, along with specifics of the processes for tables, columns, and
relationships. The limitations of the inference process are also described.

Q3:

This chapter has shown how I can access the same set of data either relationally via
the DataSet or hierarchically via the XmlDataDocument. Which is the right approach?

A3:

In brief, there is no "right" approach. It depends on where your data is coming from and what
you want to do with it. The .NET Framework provides the flexibility to fetch data from wherever it
is, whether a relational (tabular) data source or a hierarchical data source. The point is that "Data
is XML is data." Regardless of the source, you can manipulate and/or save data as either
relational or XML data, depending on your needs. An additional, nontechnical consideration might
be your level of experience and knowledge. Even if both the data source and the resulting data
need to be XML, if you aren't (yet) familiar enough with XML tools and technologies, you can still
complete the task of manipulating the data by using tabular (DataSet and DataTable)
techniques.

[ Team LiB ]

[ Team LiB ]

Chapter 11. WebForms: Database Applications with ASP.NET


IN THIS CHAPTER

An Overview of ASP.NET
Accessing a Database Through ASP.NET
Improving the Performance of ASP.NET Database Applications Through Stored Procedures
So far, we've shown you how to work with various aspects of database access and their results (such as
DataSets, DataAdapters, and Connections). In this chapter we put these items together to provide
information contained in a database to the Web via a browser. In common .NET terminology, a Web page
that collects, submits, or displays data-driven content is known as a WebForm. If you ever wrote an
application in ASP, you should fondly remember the ADODB.Connection and ADODB.Recordset objects as
being about the only ways to get data from a relational database. With the advent of .NET, not only does
ADO.NET give you new classes and options, but it also provides built-in support for working with XML as if it
were a database. Without further ado (pun intended), let's take a look at this capability from the most basic
means of access to the more advanced; then, we can concentrate on making it scale.

[ Team LiB ]

[ Team LiB ]

An Overview of ASP.NET
Since the advent of using server-side code to produce results on a Web page, developers have looked at
ways of making it easier, both for the end user and the developer. Consider the following business case:
Your current Web site allows users to access it, get basic contact information, and call you if they want to
order anything. (Not all that long ago, this limited capability was what most e-commerce sites offered.) The
problem is, you want to allow users at least to see what you have to sell; based on recent business, that
could be a lot of users.
To allow for this kind of interaction with such a Web site, many different scripting languages and
technologies began to emerge in the early days of e-commerce. One of these technologies was Microsoft's
Active Server Pages (ASP). It provided a means by which code that was embedded within a Web page could
be executed as the Web server processed the request. From a developer's point of view, this capability was
revolutionary; at last, the developer could enhance the user's experience by using VBScript that executed on
the server. This type of interaction began to overtake the approach that had everything happening in the
browser (as with JavaScript and ActiveX). In addition, VBScript expanded to the point where it could call
compiled COM objects on the Web server and incorporate their functionality in a Web page. From that stage,
Microsoft Transaction Server, now known as Component Services, emerged.
So what happens when you mix the best parts of server-side coding, client-side scripting, and code
compilation for security and scalability? You get ASP.NET. The resulting ASP.NET, or ASPX, allows for the use
of rich server-based coding in the full language that a developer is used to. Right now, C#, Visual
Basic, C++, FoxPro, Perl, COBOL, and FORTRAN have all been made available to the Common Language
Runtime (CLR) and can be used in special Web pages to enhance the user's experience.
In addition, all ASPX files are compiled and cached. That is, once the page has been requested, it is compiled
into a temporary .dll file that actually does all the work on future requests. The results are faster
performance and less latency.

HTML Controls Versus Server Controls


Another new concept presented in ASP.NET is that of the server control. A server control can be coded into a
Web page, yet every event and property associated with that control is executed at the Web server. Server
controls are analogous to HTML controls but have several advantages.
Any developer who has ever created a Web page that contains a form has seen an HTML control. Starting
with the FORM element, different types of INPUT elements (such as text, select, and checkboxes) are
created. A simple example is

<input type="text" name="txtName" value="">

These INPUT elements are still available in ASP.NET, but any attempt to validate their contents requires
either client-side scripting or server-side processing. The problem in our fictitious business case is that we
don't want to create browser incompatibilities. With client-side script, there is always a chance that the
browser doesn't support the script or that scripting has been disabled in the user's browser for security
reasons. So how can we control this environment without using client-side scripting? Server controls
are the answer.
Another interesting point of contrast to server controls is ActiveX. Its controls attempted to bring fat-client
technology to the Web browser. For more reasons than we can list here, the use of ActiveX controls
embedded in Web pages served over the Internet was a bad idea, and the use of such technology has
waned. Enter the server control. A server control contains no direct code that executes on the client; rather,
the client receives only HTML, and all interactions are processed by "posting back" the information to the
Web server. The DataGrid, which we cover later in this chapter, is an excellent example of a server control,
as the control simply takes a dataset, processes it, and produces an HTML table that can be viewed in any
browser, yet without the plain, monotonous look of a standard HTML table stuffed with data.

Note
Server controls can be created by using the Visual Studio IDE.

To complete our comparison of HTML and server controls, we use the following ASP.NET code snippet to
represent how a server-side textbox control may look:

<asp:TextBox id="txtName" runat="server" text=""></asp:TextBox>

The intent of this server control is exactly the same as the HTML control snippet shown earlier. The only
difference is that this textbox is processed at the server.

Additional Highlights of ASP.NET


Entire books have been written on just the new functionality within ASP.NET and how to use it. What follows
are a couple of items that, in addition to server-side controls, have become the most used real-world
approaches to date.
The first item is the IsPostBack functionality. When an ASPX file (also known as a WebForm) is created, the
default behavior is for the page to submit any data on the form to itself. Because you don't want a user to
get error messages for missing data values on a page she hasn't seen yet, the IsPostBack option comes into
play. IsPostBack tells you whether the page has been submitted to itself or is being requested for the first
time. The following code snippet shows how simple this option is to use:

If Page.IsPostBack() Then
'Handle the information
End If

Of course, you can still have an ASPX or HTML file that also submits to another ASPX file. This option allows
you to maintain a single file that contains the desired functionality. If you want the user to be sent to
another page once all the data has been validated and corrected, Response.Redirect still works great.
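Putting these pieces together, the first-request versus postback flow might look like the following sketch; the control name txtBirthDate and the page confirm.aspx are hypothetical:

```vbnet
Private Sub Page_Load(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles MyBase.Load
    If Page.IsPostBack Then
        'The form was submitted to itself; validate the value.
        If IsDate(Request.Form("txtBirthDate")) Then
            'All data is good, so send the user on.
            Response.Redirect("confirm.aspx")
        Else
            Response.Write("Please enter a valid date.")
        End If
    End If
    'On the first request, fall through and render the empty form.
End Sub
```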

Another noteworthy feature of ASP.NET is the means by which you can deploy an application. Basically, you
copy all the files from one machine to another. Of course, there are a few other things to consider. The first
is that the new virtual directory in Internet Information Server (IIS) must be configured as an application. To
do that, open the IIS MMC applet, right-click on the desired directory and select Properties, which displays
the Properties dialog; then select the Virtual Directory or the Home Directory tab, depending on your
operating system. In the Application Settings area, if the virtual directory isn't configured as an application,
you'll see a Create button. If the virtual directory is already configured as an application, don't change
anything; just click on Cancel to close the dialog. After you click on Create, click on OK to close the dialog.
Your virtual directory is now configured as an application.
Any server to which you copy an application need not have Visual Studio installed. Only the .NET Framework,
which is downloadable for free from Microsoft's Web site, http://www.microsoft.com, is required.
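The copy step itself, commonly called xcopy deployment, can be sketched with a single command; the source and destination paths here are placeholders:

```bat
rem Copy the site's files and subdirectories to the production server
xcopy C:\Projects\Novelty \\ProductionServer\wwwroot\Novelty /E /I /Y
```

Because an ASP.NET application's configuration lives in files rather than the registry, no registration step is needed beyond the virtual-directory setup described above.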

[ Team LiB ]

[ Team LiB ]

Accessing a Database Through ASP.NET


At the heart of any database application is, of course, a database. To use a database, you must first have a
way to connect to it reliably and somewhat securely. This capability is provided through the System.Data
namespace and is usually a simple string.
A number of difficulties are associated with accessing a database. One difficulty that you may not be aware
of is that the ASPNET user doesn't have rights to do anything on the database, including even the ability to
execute a SELECT on a table. Another frequently encountered problem is storing the names and passwords
of system administrator-level users in plain text in a file that the Web server wouldn't normally display to
outside users. For example, when viewing the source of a classic ASP page, it isn't uncommon to find code
such as

Set Conn = Server.CreateObject("ADODB.Connection")


Conn.Open("server=myServerName;uid=sa;pwd=")

For the examples presented in this chapter, the TRUSTED_CONNECTION = YES option is used. Using a
trusted connection means that the user is authenticated to Windows and that authenticated account
information also exists within SQL Server. In most real-world situations, you wouldn't add the ASPNET
account to SQL Server unless you were going to restrict the functionality provided to this user to SELECT
only.

Note
The ASPNET user is the default identity used by anonymous Web requests performed against
Internet Information Server when the .NET Runtime is installed.

Adding the ASPNET User to the SQL Server Logins


Again, to make the TRUSTED_CONNECTION option work, a Windows account must exist, and that account
must be added to the SQL Server logins. Also, the SQL Server must be set up to allow Windows
Authentication. Even though we covered most of this process in a previous chapter, we present it here to
show how managing the ASPNET account within SQL Server is what allows the TRUSTED_CONNECTION
option to be used. We now demonstrate how to do so for the rare case mentioned previously, where we let
the ASPNET user execute SELECT statements against the Novelty database.
1. Open SQL Server Enterprise Manager, navigate to the Security node, and expand it, as shown in
Figure 11.1.
Figure 11.1. SQL Server Enterprise Manager: Security node

2. Right-click on Logins and select New Login. A dialog similar to that shown in Figure 11.2 appears.
Figure 11.2. Login dialog

3. On the General tab, at the top of the login dialog is a textbox for inserting the user. At this point,
either click on the ' ' button to browse for a user or enter the user's name in the format of
machinename\username or domain\username. Use of the browse functionality is shown in Figure 11.3 .
Figure 11.3. General tab: Selecting a user

4. Scroll down the list to find the ASPNET user entry and double-click on it. Then click on OK on the
browse dialog.
5. At the bottom of the General tab, click on the drop-down list labeled "Database" and change it to
Novelty, as shown in Figure 11.4.
Figure 11.4. General tab: Selecting a database

6. Next, click on the Database Access tab. Click on the checkbox beside the Novelty database only. Also,
under Permit in Database Role at the bottom of the dialog, select public. These steps are shown in
Figure 11.5.
Figure 11.5. Database Access tab: Setting a database and role

7. Click on the public role and click on the Properties button located beside the Permit in Database Role
list. A dialog similar to that shown in Figure 11.6 is presented.
Figure 11.6. Properties for public role

8. Click on the Permissions button at the top right of the dialog. Doing so opens the low-level
permissions that we're going to modify. Figure 11.7 shows the resulting dialog.
Figure 11.7. Permissions dialog for public role

9. Be sure that the option List all objects is selected and scroll to the bottom of the list, where the tables
from the Novelty database will show up. For each table listed, check the box in the SELECT column, as
shown in Figure 11.8. Click on it only once to allow the user to perform a SELECT; clicking on it twice
causes a red X to appear, indicating that access to that table is explicitly denied.
Figure 11.8. Table-level permissions dialog

10. At this point, you should also put a check in the Insert column for the tblOrders table, as shown in
Figure 11.9. You wouldn't want potential customers to be unable to buy anything.
Figure 11.9. Setting INSERT on tblOrders

11. For each open dialog window, simply click on OK. There should be about three of them. Now the
ASPNET user will appear in the Logins window of SQL Server Enterprise Manager, as shown in Figure
11.10.
Figure 11.10. SQL Server Enterprise Manager: Logins window

At this point, you have successfully added the ASPNET user to the SQL Server logins. In the next section, we
use this account to connect to the database, using a connection string that doesn't expose any login
information.
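If you prefer script to the Enterprise Manager dialogs, the same setup can be sketched in Transact-SQL. The machine name MYSERVER is a placeholder, and the grants assume the table names used in this chapter's listings:

```sql
-- Create a SQL Server login for the Windows ASPNET account
EXEC sp_grantlogin 'MYSERVER\ASPNET'
GO
USE Novelty
GO
-- Give that login access to the Novelty database
EXEC sp_grantdbaccess 'MYSERVER\ASPNET'
GO
-- Let the public role read customers and insert orders
GRANT SELECT ON tblCustomer TO public
GRANT SELECT, INSERT ON tblOrder TO public
GO
```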

TRUSTED_CONNECTION in Action
We now show how to use TRUSTED_CONNECTION to connect to the database and execute a simple query
through ASP.NET. The setup for this example is easy: Create a new VB.NET ASP.NET WebForms Application
and name it Novelty.
The first thing to do is to rename the default WebForm1.aspx file as default.aspx by simply right-clicking on
the WebForm1.aspx file and selecting Rename. Then, in the highlighted area, type "default.aspx" to make it
the default page for the directory. (We will make many changes to this page as we experiment through the
rest of this chapter.)
For starters, though, look at the code in Listing 11.1 . It is a very basic example of connecting to the
database with the ASPNET user account, executing a simple SELECT query, and then displaying the results
on a Web page.
Listing 11.1 default.aspx.vb

Imports System.Data
Imports System.Data.SqlClient
Public Class WebForm1
Inherits System.Web.UI.Page
#Region "Web Form Designer Generated Code"

'This call is required by the Web Form Designer.


<System.Diagnostics.DebuggerStepThrough()> Private Sub InitializeComponent()
End Sub
Private Sub Page_Init(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles MyBase.Init
'CODEGEN: This method call is required by the Web Form Designer.
'Do not modify it using the code editor.
InitializeComponent()
End Sub
#End Region
Dim connString As String
Private Sub Page_Load(ByVal sender As System.Object, _
ByVal e As System.EventArgs) Handles MyBase.Load
'Set the connection string
connString = "server=(local);database=Novelty; TRUSTED_CONNECTION=Yes"
'This is all the information we need
'to connect to the database. Also, if
'someone were to ever find this file
'in a raw format, he could not use the
'information to attempt to login to the database.
'Intern the string. String.Intern checks
'whether an equal string already exists in
'the runtime's intern pool. If not, the
'string is added; either way, a reference
'to the pooled instance is returned, so we
'assign it back to connString.
connString = String.Intern(connString)
ShowCustomers()
End Sub
Private Sub ShowCustomers()
'This is just a simple function to get
'things started, showing the collection
'and displaying tblCustomer.
'Initialize the connection object with
'the connection string.
Dim conn As New SqlConnection(connString)
'Also, initialize the command object with
'the SQL to be executed.
Dim cmd As New SqlCommand("SELECT * FROM tblCustomer", conn)
conn.Open()
Dim dReader As SqlDataReader = cmd.ExecuteReader(CommandBehavior.CloseConnection)
While dReader.Read
Response.Write(dReader.GetString(1))
Response.Write("&nbsp;" & dReader.GetString(2))
Response.Write("<BR>")
End While
dReader.Close()

conn.Close()
End Sub
End Class

Note that in Listing 11.1 the System.Data and System.Data.SqlClient namespaces are included. These
two namespaces provide the classes and functionality needed to connect and query the database. Also note
that Listing 11.1 is a code-behind page. That is, the actual file, default.aspx, has no real code to speak of;
it's just there to present what we tell it to from the code-behind page. We illustrate this concept in Listing
11.2 , which shows the entire code from the file default.aspx.
Listing 11.2 default.aspx

<%@ Page Language="vb" AutoEventWireup="false" Codebehind="default.aspx.vb"
    Inherits="Novelty.WebForm1" %>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<HEAD>
<title>WebForm1</title>
<meta name="GENERATOR" content="Microsoft Visual Studio.NET 7.0">
<meta name="CODE_LANGUAGE" content="Visual Basic 7.0">
<meta name="vs_defaultClientScript" content="JavaScript">
<meta name="vs_targetSchema" content="http://schemas.microsoft.com/intellisense/ie5">
</HEAD>
<body MS_POSITIONING="GridLayout">
<form id="Form1" method="post" runat="server">
</form>
</body>
</HTML>

At the top of Listing 11.2 is a directive. It tells the ASP.NET execution engine that a code-behind file named
default.aspx.vb is being used. When you use the Build and Browse functionality within VS.NET to view this
page (usually by right-clicking on an .aspx file in the Solution Explorer and selecting Build and Browse), two
things happen. First, the page is compiled into a .dll; from that point on, whenever the page is requested,
ASP.NET will use its compiled copy of the page. Second, a browser window will open, showing the results of
executing the code. Figure 11.11 shows the resulting page.
Figure 11.11. default.aspx results

Note
All the code examples presented in this chapter are available from the publisher's Web site,
http://www.awprofessional.com .

Working with the DataGrid


When Microsoft released early versions of the .NET Framework SDK, the samples showing how to iterate
through a collection of data used a DataGrid User Control. A User Control has little or nothing to do with an
ActiveX control from days past. A User Control for use on a WebForm is a piece of templated functionality
that executes server-side code to produce HTML for the client. This technology is very powerful, as it lets
you package complex user-interface logic into a reusable, compiled object that raises no client-side security
worries or, worse, browser incompatibility issues. Because Server Controls are written to produce HTML,
ensuring that the emitted HTML renders consistently across browsers is up to the developer.


The DataGrid control is an excellent example of a control that allows the developer simply to bind a
DataSet to a DataGrid and get instant visual results. The code presented in Listing 11.3 shows the basic
query in Listing 11.1 being used to fill a DataGrid control. To use any sort of Server Control, you must
declare it. Once you've declared it, you can access its methods and properties just as you can any other
object on the page. To begin, we created a new WebForm in Visual Studio.NET and named it WebGrid.aspx.
Then, we placed a DataGrid control on the page, using the available controls from the WebForms tab of the
toolbox.
Listing 11.3 WebGrid.aspx

<%@ Import Namespace="System.Data.SqlClient" %>


<%@ Import Namespace="System.Data" %>
<%@ Page Language="vb" AutoEventWireup="false" Codebehind="WebGrid.aspx.vb"
Inherits="Novelty1.WebGrid"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<HEAD>
<title>WebGrid</title>
<%Dim connString As String
connString = "server=(local);database=Novelty;TRUSTED_CONNECTION=Yes"
Dim conn As New SqlConnection(connString)
Dim cmd As New SqlCommand("select * from tblCustomer", conn)
conn.Open()
Dim dReader As SqlDataReader = cmd.ExecuteReader(CommandBehavior.CloseConnection)
DataGrid1.DataSource = dReader
DataGrid1.DataBind()
dReader.Close()
conn.Close()
%>
<meta name="GENERATOR" content="Microsoft Visual Studio.NET 7.0">
<meta name="CODE_LANGUAGE" content="Visual Basic 7.0">
<meta name="vs_defaultClientScript" content="JavaScript">
<meta name="vs_targetSchema" content="http://schemas.microsoft.com/intellisense/ie5">
</HEAD>
<body MS_POSITIONING="GridLayout">
<form id="Form1" method="post" runat="server">
<asp:DataGrid id="DataGrid1"
style="Z-INDEX: 101; LEFT: 179px;
POSITION: absolute; TOP: 73px"
runat="server"
Width="640px"
Height="480px"
BackColor="#fffff5"
BorderColor="black"
ShowFooter="true"
CellPadding="1"
CellSpacing="1"
Font-Name="Arial"
Font-Size="8pt"

HeaderStyle-BackColor="#c0c0c0"
EnableViewState="false">
</asp:DataGrid>
</form>
</body>
</HTML>

When this code is executed against the Novelty database, the results should be similar to those shown in
Figure 11.12 .
Figure 11.12. WebGrid.aspx results

This server control provides a way to display the data without writing a single line of code related to the
actual logic of displaying the data. User Controls, by contrast, basically are pagelets. They aren't complete
.aspx pages (in fact, they have an .ascx extension), but only specific sections, such as a reusable form or the
code used to show an error message on a common error page. You can create these types of controls to
meet any need for specific functionality embedded in a Web-based user interface.
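As a sketch of how such a pagelet is consumed, a hypothetical ErrorMsg.ascx user control could be registered and placed on a page as follows; the TagPrefix, TagName, and file name are all made up for illustration:

```aspx
<%@ Register TagPrefix="nov" TagName="ErrorMsg" Src="ErrorMsg.ascx" %>
<form id="Form1" method="post" runat="server">
    <nov:ErrorMsg id="ErrorMsg1" runat="server" />
</form>
```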

Note
In an environment of dynamic content, certain variables often remain static. This is usually the
case with a database connection string. In Listing 11.1, the connection string is interned. That is,
the CLR checks its intern pool to see whether a string with the same value already exists in
memory. If it does, the instance already in memory is used instead of a new string object being
created and filled with the value given. If it doesn't exist in the pool, the string is added with the
current value assigned to it. This approach actually conserves resources because the CLR performs
these checks faster than it can allocate memory for a new string. Throughout a project, you can
reuse many strings in this manner, thus enhancing performance of the application.
You shouldn't use this method for strings that frequently change because there's no use in
checking for an existing value if you know it won't be there. This is not to say that this method
should not be used with properties. There are certain times when a property will be required to
have a specific set of values, such as in the case of an enumeration. In these instances, it is
perfectly acceptable to use the Intern method to save a little memory as the application
executes. In Chapter 12 we describe this and other performance-enhancing techniques.

[ Team LiB ]

[ Team LiB ]

Improving the Performance of ASP.NET Database Applications Through Stored Procedures


With all the advances made in .NET, many developers are beginning to rely more and more on the front end
of an application to provide functionality that makes data conform to the specific business rules of an
application. Although ASP.NET pages are compiled and execute faster than regular ASP pages, keeping the
business logic in the page is still not a manageable or very maintainable process for most organizations.
In most cases, the database server ends up being the most durable box from a hardware point of view. Most
developers will take no chances on whether the database server has an adequate number of processors,
memory, disk space, or built-in backup devices. Yet, for most operations, the database server, when
properly configured, is one of the least resource-intensive machines. Consider the following machine
configurations for a fictitious Web application.

Machine: Specifications

Web server: Pentium 4 800 MHz, 1 GB RAM, RAID 5 18.1 GB SCSI Disk Array

Database server: Quad-Pentium 4 Xeon 800 MHz, 4 GB ECC-RAM, RAID 1 Disk Array for Operating System,
RAID 5 72 GB SCSI Disk Array for Database and Log Files, Redundant Network Interface Cards, and
Redundant Power Supplies

This is where the use of stored procedures can really boost the performance of your application. Consider the
basic chain of events that happens in a Web request.
1. The user navigates to the Web page, enters some information in a form, and then submits that form to
the Web server.
2. The Web server parses the information and, as directed, performs some action to validate and/or
collect the information the user submitted.
3. A response is sent to the user, usually in the form of a Web page notifying her of success or failure.
4. The user continues through the application.
The goal is to limit the number of times that the user must send information to the Web server, yet at the
same time keep the application from being drawn down by too much code in the pages. Consider the
following steps from the computer's point of view.
1. A request is received from a user, containing some data.
2. Is this data in a valid format? If so, put it in the database. If not, handle the errors.
3. Can a connection be made to the database? If so, go ahead with execution; if not, handle the errors.
4. Did the database get the data? If so, good; if not, handle the errors.
5. Is the client still there? If so, send the response; if not, terminate the connection.

Obviously, the Web server has plenty to do without having to deal with a lot of added code in the pages. And
the preceding steps don't even take into consideration "dangling" objects that may have been created and
not destroyed.
Next, we change the code from Listings 11.1 and 11.2 to reflect the use of a stored procedure to return data
to a page. From there, it's merely a matter of creating stored procedures that perform the tasks that we
need them to perform. This model creates an efficient two-tier application.

Warning
As with any ASP.NET application, when collecting data from a form, you should use
Server.HtmlEncode(Request.Form("objectName")) to help prevent cross-site scripting attacks, because
it turns any markup in the input into harmless literal text. Keep in mind that encoding alone doesn't
protect the database: to eliminate risks such as a malicious user inserting "; TRUNCATE TABLE MASTER"
into a textbox named "txtFirstName" on a Web page and having SQL Server execute it, pass the input to
SQL Server as a typed parameter rather than concatenating it into the query text.

Some developers seem to be uneasy about using stored procedures from Web-based applications. This
unease most often comes from confusion regarding the Command/SqlCommand object, ADO versus ADO.NET,
and how to build parameters for a stored procedure. As covered in Chapter 4, setting parameters in code
"prequalifies" the data by declaring its type, which keeps SQL Server and .NET from having to spend time
figuring out the type of the data (integer, string, and so on) being sent. In our case, we are executing a
stored procedure with no parameters. The code in Listing 11.4 shows the procedure created for this
example.
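For the record, building a typed parameter takes only a few lines. The sketch below assumes a hypothetical stored procedure named sp_GetCustomersByLastName that accepts one @LastName parameter; the pattern, not the procedure, is the point:

```vbnet
Dim cmd As New SqlCommand("sp_GetCustomersByLastName", conn)
'Tell ADO.NET this is a stored procedure, not inline SQL.
cmd.CommandType = CommandType.StoredProcedure
'Declare the parameter's type and size up front, then set its value.
'Passing user input this way also keeps SQL Server from executing it.
cmd.Parameters.Add("@LastName", SqlDbType.VarChar, 50).Value = _
    Request.Form("txtLastName")
```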

Note
Using the steps outlined in the Adding the ASPNET User to the SQL Server Logins section earlier in
the chapter, you must configure the public role to have EXEC permissions on the stored procedure
created and used in Listings 11.4 and 11.5 .

Listing 11.4 sp_GetCustomersOrders

CREATE PROCEDURE sp_GetCustomersOrders AS
SELECT tblCustomer.LastName, tblCustomer.FirstName, tblOrder.OrderDate
FROM tblOrder
INNER JOIN tblCustomer ON tblCustomer.ID = tblOrder.CustomerID
ORDER BY tblCustomer.LastName
GO

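Incidentally, the EXEC permission mentioned in the earlier Note can be granted with a single Transact-SQL statement once the procedure exists:

```sql
GRANT EXECUTE ON sp_GetCustomersOrders TO public
```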
Note that the query asks only for what we need. In Listing 11.3, we used SELECT * even though only certain
columns were shown on the page. We did so to emphasize that if you don't need it, don't ask for it. In the
interest of application scalability, a minimalist approach must apply.
Now, let's take a look at the modified ASP.NET code in Listing 11.5 . Note that we had to add a line to handle
the date being returned by the stored procedure.

Listing 11.5 ShowCustomers SubRoutine

Private Sub ShowCustomers()


'This is just a simple function to get
'things started, showing the collection
'and displaying tblCustomer.
'Initialize the connection object with
'the connection string
Dim conn As New SqlConnection(connString)
'Also, initialize the command object with
'the SQL to be executed
Dim cmd As New SqlCommand("exec sp_GetCustomersOrders", conn)
conn.Open()
Dim dReader As SqlDataReader = cmd.ExecuteReader(CommandBehavior.CloseConnection)
While dReader.Read
Response.Write(dReader.GetString(0))
Response.Write("&nbsp;" & dReader.GetString(1))
Response.Write("&nbsp;" & dReader.GetDateTime(2))
Response.Write("<BR>")
End While
dReader.Close()
conn.Close()
End Sub

By using a stored procedure, we have increased efficiency. Specifically, stored procedures are compiled on
the database server and execute rapidly compared to dynamic queries. In Listing 11.1 , the SELECT
statement had to be interpreted each time by SQL Server before any data was returned. At this point, both
the page and the query have been compiled; from a performance perspective, this is a very nice feature of
the .NET Framework.

[ Team LiB ]

[ Team LiB ]

Summary
In this chapter we introduced you to the fundamentals of accessing SQL Server from ASP.NET in a manner
that is reliable and scalable. By the end of this chapter, you had seen how not only to execute dynamic
queries, but also to execute stored procedures from ASP.NET without having to use a DataGrid to display
the results. In this chapter we also touched on the middle tier, which we discuss in detail in Chapter 12.

Questions and Answers

Q1:

What databases are accessible from ASP.NET?

A1:

ASP.NET shares the same accessibility to data that ADO.NET provides. That is, for any data
source that ADO.NET can connect to, ASP.NET will provide the same functionality.

Q2:

If everything is in XML, why not just use a parser?

A2:

A parser provides functionality to extract data from elements within a structure. ADO.NET
provides functionality to parse data either returned as XML or retrieved in the native format from
a database. In addition, ADO.NET offers performance benefits over parsing the XML yourself.

[ Team LiB ]

[ Team LiB ]

Chapter 12. Web Services and Middle-Tier Technologies


IN THIS CHAPTER

Using the Middle Tier to Provide Presentation Logic


Using Data in the Middle Tier
Exposing Objects Through Web Services
Putting It All Together
Perhaps you haven't yet had a chance to work in the true middle tier or have worked primarily with
applications that had a configuration such as a Web server and a database server. In that case, most of the
application logic was actually written in the ASP pages, and the database only held data. To most of us, this
area is simply referred to as plumbing. The application logic, be it the middle tier or spread across tiers,
makes the application happen. It may dictate what a user sees, but the user never sees it.
For years this type of plumbing was implemented as Visual Basic (or C++) .dll files known as COM objects.
The premise is simple: Create your code in a compiled object that can be shared, reused, or accessed
remotely. A common example is the frequently used lines of code that declare an ADODB.Connection and
an ADODB.Recordset object, which take a connection string and a query and return a recordset. This
logic could very easily be put into a function that returns a recordset object and takes the connection
string and/or query as parameters. In theory, this method was wonderful; however, in reality there were
problems with version control, remote access, and general lack of understanding in the developer
community.
Microsoft has made incredible progress with the way that middle-tier development can take place within
.NET. To start with, Microsoft opened low-level libraries to all the languages in Visual Studio.NET. As a result,
Visual Basic developers now have much easier access to items such as threads and marshalling in order to
control performance better. The next great stride was side-by-side versioning. It allows production and
development versions of code on the same machine without them bumping into each other. Side-by-side
versioning is made available by .NET's use of folder structures for locations as opposed to registry settings
for everything. Of course, these capabilities factor into the use of Web Services. Although not a new
technology in terms of Internet time, Web Services are perhaps the most exciting thing to happen to
distributed computing in quite some time. In this chapter we take you through the process of determining
what goes in the middle tier and show different ways of implementing it.

[ Team LiB ]

[ Team LiB ]

Using the Middle Tier to Provide Presentation Logic


At some point, it's going to happen: a requirement to perform some kind of validation on information that a
user is entering on a WebForm. Perhaps you have to validate that the user is entering a valid date (pun
intended), which usually is all that's necessary to create a rift between requirements and reality. In reality,
most developers know that this simple task could be carried out in numerous ways on the client side, or
presentation layer. The requirement, though, is to limit the client-side code strictly to HTML output. Now the
significance of the term middle tier becomes clear: It is the area that handles what happens between the
client and the data the client wants to interact with.
Visual Basic.NET has a convenient built-in function, called IsDate, to handle the quandary of date validation.
It returns a Boolean value based on whether the information passed to it can be converted to a valid date
format. Where you use this function makes the difference. So, let's take a look at this function from the
highest level.
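A few sample inputs illustrate the behavior; the return values shown reflect IsDate under the default U.S. English locale:

```vbnet
Dim ok As Boolean
ok = IsDate("12/25/2002")        'True: a recognizable short date
ok = IsDate("February 12, 1969") 'True: long date formats work, too
ok = IsDate("13/45/2002")        'False: no thirteenth month
ok = IsDate("hello")             'False: not a date at all
```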
Create a page in your Visual Basic.NET WebForms Application and name it datecheck.aspx. On this page,
there will only be two elements: a server-side textbox control and a server-side button control. When the
button is clicked on, the information is sent to the server where the IsDate function is executed. From
there, the result is written back to the Web page. Listing 12.1 shows the complete datecheck.aspx.vb file,
and Listing 12.2 is the code for datecheck.aspx.
Listing 12.1 datecheck.aspx.vb

Public Class datecheck


Inherits System.Web.UI.Page
Protected WithEvents TextBox1 As System.Web.UI.WebControls.TextBox
Protected WithEvents Button1 As System.Web.UI.WebControls.Button
#Region "Web Form Designer Generated Code"
'This call is required by the Web Form Designer.
<System.Diagnostics.DebuggerStepThrough()> _
Private Sub InitializeComponent()
End Sub
Private Sub Page_Init _
(ByVal sender As System.Object, ByVal e As System.EventArgs) _
Handles MyBase.Init
'CODEGEN: This method call is required by the Web Form Designer.
'Do not modify it with the code editor.
InitializeComponent()
End Sub
#End Region
Dim Msg As String
Private Sub Page_Load _
(ByVal sender As System.Object, ByVal e As System.EventArgs) _
Handles MyBase.Load
Button1.Text = "Check Date"
TextBox1.Text = DateTime.Now.ToString
End Sub
Private Sub Button1_Click _
(ByVal sender As System.Object, ByVal e As System.EventArgs) _
Handles Button1.Click
Msg = IsDate(Request.Form.Item("TextBox1")).ToString
Msg += "<BR>"
Msg += Request.Form.Item("TextBox1")
If Page.IsPostBack Then
Response.Write(Msg)
Button1.Text = "Date Checked"
End If
End Sub
End Class

In Listing 12.1 there is no infringement on the client. All activity is happening on the server.
Listing 12.2 datecheck.aspx

<%@ Page Language="vb" AutoEventWireup="false" Codebehind="datecheck.aspx.vb"


Inherits="Novelty1.datecheck"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<HEAD>
<title>datecheck</title>
<meta name="GENERATOR" content="Microsoft Visual Studio.NET 7.0">
<meta name="CODE_LANGUAGE" content="Visual Basic 7.0">
<meta name="vs_defaultClientScript" content="JavaScript">
<meta name="vs_targetSchema"
content="http://schemas.microsoft.com/intellisense/ie5">
</HEAD>
<body MS_POSITIONING="GridLayout">
<form id="Form1" method="post" runat="server">
<asp:TextBox id="TextBox1"
style="Z-INDEX: 101; LEFT: 10px; POSITION: absolute; TOP: 36px"
runat="server" Width="165px" Height="20px">
</asp:TextBox>
<asp:Button id="Button1"
style="Z-INDEX: 102; LEFT: 14px; POSITION: absolute; TOP: 73px"
runat="server" Width="104px" Height="25px" Text="Button">
</asp:Button>
</form>
</body>
</HTML>

In Listing 12.2, the runat="server" attribute is given to the form elements. That attribute is what causes
the code placed in the datecheck.aspx.vb file to run. Listing 12.3 illustrates the actual HTML generated and sent to the
client. As with Server Controls, the control code itself is never sent to the client; only the HTML that results
from the control code being processed on the server is sent.
Listing 12.3 HTML client code

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">


<HTML>
<HEAD>
<title>datecheck</title>
<meta name="GENERATOR" content="Microsoft Visual Studio.NET 7.0">
<meta name="CODE_LANGUAGE" content="Visual Basic 7.0">
<meta name="vs_defaultClientScript" content="JavaScript">
<meta name="vs_targetSchema"
content="http://schemas.microsoft.com/intellisense/ie5">
</HEAD>
<body MS_POSITIONING="GridLayout">
<form name="Form1" method="post" action="datecheck.aspx" id="Form1">
<input type="hidden" name="_VIEWSTATE" value="dDwxNDg5OTk5MzM7dDw7bDxpPDE+Oz47
bDx0PDtsPGk8Mz47PjtsPHQ8cDxwPGw8VGV4dDs+O2w8Q2hlY2sgRGF0ZTs+Pjs+Ozs+Oz4+Oz4+Oz7
7Edv+v9RpR0ZNzrTLDqp+CeaP+Q==" />
<input name="TextBox1" type="text" value="9/30/2002 10:42:52 AM" id="TextBox1"
style="height:20px;width:165px;Z-INDEX: 101; LEFT: 10px;
POSITION: absolute; TOP: 36px" />
<input type="submit" name="Button1" value="Check Date" id="Button1"
style="height:25px;width:104px;Z-INDEX: 102; LEFT: 14px;
POSITION: absolute; TOP: 73px" />
</form>
</body>
</HTML>

Listing 12.3 demonstrates that almost any application logic could be stored in the middle tier. We deal with
the next level when we start working with the database. Connecting to a database and executing a query are
a bit more involved than just validating a date, but not by much, as we show in the following sections.

[ Team LiB ]


Using Data in the Middle Tier


So far, we've identified the presentation layer as basically what the client, or user, sees. We've also
identified the middle layer, or tier, as the plumbing that makes the application work. Now it's time to show
the final tier of the infamous "n-tier" architecture: the data tier. Usually, the data tier consists of a database,
relational or otherwise. It could be an XML file, .INI file, or any other data repository. For these examples,
though, we use SQL Server 2000 as the data repository.
In Chapter 11 , Listing 11.1 , we showed a simple example of using the middle tier to connect to the
database, execute a query, and return an object presented in the form of an HTML table. None of the code in
that listing affects any of the data returned; it simply displays it. The actual execution of the query even
happens in the database. Listing 12.4 repeats the code from Chapter 11 , Listing 11.5 , and we use it here to
show some different points (discussed following the listing).
Listing 12.4 ShowCustomers subroutine

Private Sub ShowCustomers()


'This is just a simple function to get
'things started, showing the collection
'and displaying tblCustomer.
'Initialize the connection object with
'the connection string.
Dim conn As New SqlConnection(connString)
'Also, initialize the command object with
'the SQL to be executed.
Dim cmd As New SqlCommand("exec sp_GetCustomersOrders", conn)
conn.Open()
Dim dReader As SqlDataReader = cmd.ExecuteReader(CommandBehavior.CloseConnection)
While dReader.Read
Response.Write(dReader.GetString(0))
Response.Write("&nbsp;" & dReader.GetString(1))
Response.Write("&nbsp;" & dReader.GetDateTime(2))
Response.Write("<BR>")
End While
dReader.Close()
conn.Close()
End Sub

Note in Listing 12.4 the use of a stored procedure, which is an excellent example of placing application logic
in the data tier of an application. The reason for this placement is quite simple: It is based on the complex
scientific fact that the shortest distance between two points is a straight line. If the code exists on the
database server, is compiled to run efficiently on the database server, and is executed against a database on
that same server, it will execute faster than if the code in the stored procedure resides in the middle tier.
However, this is not a perfect world, and use of stored procedures isn't allowed in all circumstances. This
restriction brings into play another complex scientific principle: that of the path of least resistance. Least
resistance in this case means that the next tier up from the data tier is used to execute the application logic.
Because there is a requirement that limits use of the client to display only (meaning no client-side
scripting), it's time to look at how we can implement in the middle tier functionality that might be better
served by the data tier.

Creating Reusable Middle-Tier Components


In this section, we take a simple query that could be a stored procedure and place it in a reusable object.
The goal is to show how creating functionality in one place can be used in multiple applications. To begin,
within an existing or new Visual Basic.NET project, right-click on the project name in the Solution Explorer.
Select Add and then Add Component. A dialog will prompt you to enter a filename. Enter GetRowCount.vb,
which will be the name of the class that you can access from any application when you've finished.

Note
All code samples from Chapter 11 and this chapter are based on a solution called Novelty1 . It is
the namespace used for every file created in these chapters.

Once you have created the empty component file, all that is needed is a little code. For this example, create
a function, GetRowCount, that returns an integer value. The code in Listing 12.5 comprises the complete file.
Listing 12.5 GetRowCount.vb

Imports System.Data
Imports System.Data.SqlClient
Public Class GetRowCount
Inherits System.ComponentModel.Component
Public Function GetRowCount() As Integer
Try
Dim connString As String
'Recall from Chapter 11 the discussion on
'String.Intern. If it already exists and
'has the same value, its memory location
'will be used instead of creating a new
'instance.
connString = "server=(local);database=Novelty;TRUSTED_CONNECTION=Yes"
Dim conn As New SqlConnection(connString)
Dim cmd As New SqlCommand("select count(*) from tblCustomer", conn)
conn.Open()
Dim dReader As SqlDataReader = cmd.ExecuteReader(CommandBehavior.CloseConnection)
While dReader.Read
'Get what should be the first and
'only row in our result set.
GetRowCount = dReader.GetValue(0)
End While

dReader.Close()
conn.Close()
Catch
    System.Console.WriteLine("An error has occurred: " & Err.Description)
End Try
End Function
#Region "Component Designer generated code "
Public Sub New(ByVal Container As System.ComponentModel.IContainer)
MyClass.New()
'Required for Windows.Forms Class Composition Designer support
Container.Add(Me)
End Sub
Public Sub New()
MyBase.New()
'This call is required by the Component Designer.
InitializeComponent()
'Add any initialization after the InitializeComponent() call.
End Sub
'Component overrides dispose to clean up the component list.
Protected Overloads Overrides Sub Dispose(ByVal disposing As Boolean)
If disposing Then
If Not (components Is Nothing) Then
components.Dispose()
End If
End If
MyBase.Dispose(disposing)
End Sub
'Required by the Component Designer
Private components As System.ComponentModel.IContainer
'NOTE: The following procedure is required by the Component
'Designer.
'It can be modified using the Component Designer.
'Do not modify it with the code editor.
<System.Diagnostics.DebuggerStepThrough()> Private Sub InitializeComponent()
components = New System.ComponentModel.Container()
End Sub
#End Region
End Class

Diving deeply into inheritance is beyond the scope of this book, but we can show a good example of what it
is. Although there is no code in the component for a ToString method, one exists once the component has
been compiled. That is due to the line of code

Inherits System.ComponentModel.Component

It brings in functionality that exists in the class

System.ComponentModel.Component

which inherits functionality from System.Object . Thus inheritance brings functionality into the class
without our having to do any work.
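As a minimal sketch (not from the chapter's project), you could see this inherited behavior from any code that references the component; the call below works even though GetRowCount.vb never defines ToString:

```
'Hypothetical snippet: nothing in GetRowCount.vb defines ToString,
'yet this compiles and runs because the method is inherited from
'System.Object by way of System.ComponentModel.Component.
Dim comp As New GetRowCount()
'Typically prints the fully qualified type name, such as "Novelty1.GetRowCount"
System.Console.WriteLine(comp.ToString())
comp.Dispose()
```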
To complete the component, we simply right-click on the solution name in the Solution Explorer and select
Build. The component is now ready to use, but how can we use it? We create a WebForm named
GetRowCountTest.aspx. The code behind for this page is shown in Listing 12.6 . Note that, at the beginning
of the page, an Imports statement brings in the functionality of the GetRowCount component.
Listing 12.6 GetRowCountTest.aspx.vb

Imports Novelty1.GetRowCount
Public Class GetRowCountTest
Inherits System.Web.UI.Page
#Region "Web Form Designer Generated Code"
'This call is required by the Web Form Designer.
<System.Diagnostics.DebuggerStepThrough()> _
Private Sub InitializeComponent()
End Sub
Private Sub Page_Init(ByVal sender As System.Object, _
    ByVal e As System.EventArgs) Handles MyBase.Init
'CODEGEN: This method call is required by the Web Form Designer.
'Do not modify it with the code editor.
InitializeComponent()
End Sub
#End Region
Private Sub Page_Load(ByVal sender As System.Object, _
    ByVal e As System.EventArgs) Handles MyBase.Load
Dim GRC As New GetRowCount()
Response.Write(GRC.GetRowCount.ToString)
GRC.Dispose()

End Sub
End Class

Calling the inherited Dispose method isn't necessary, but it does expedite clearing this object from
memory. Once the page has been created and code pasted or typed, you can use the Build and Browse
functionality from the Solution Explorer to preview the page. Using the Novelty database built throughout
this book, and accompanying scripts, you should get a Web page that simply has the number 2000 at the
top of it.
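Because calling Dispose explicitly can be skipped if an exception occurs first, one common pattern is to wrap the work in a Try...Finally block (VB.NET at this release has no Using statement). This is a sketch, assuming the same GetRowCount component:

```
'Sketch: guarantee Dispose runs even if GetRowCount throws.
Dim GRC As New GetRowCount()
Try
    Response.Write(GRC.GetRowCount().ToString())
Finally
    'This runs on both the success and failure paths.
    GRC.Dispose()
End Try
```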

Using the Component from Another Application


Now that we have this gem of code, it's time to share it. To do so, we create a new application of type Visual
Basic.NET Windows Application and set a reference to the Novelty1.dll file located in the \bin directory of
the Novelty1 application. If you're not working in the Novelty1 namespace (or have created your own Web
application), the .dll file that you will need to reference is located in the \bin directory of that Web
application (or a Windows application, as the case may be). It is typically inetpub\wwwroot\<Application
Name>\bin.
To set a reference, right-click on the References element in the Solution Explorer and select Add Reference.
When the References dialog appears, click on Browse to locate the .dll just described. Then click on OK on
the References Dialog. For this example, the file is located in
C:\Inetpub\Wwwroot\Novelty1\bin
and is named Novelty1.dll.
In the code for your Windows application previously described, note that, if you start an Imports statement,
the namespace Novelty1 is available to you. From here, the rest of the code looks almost like that in Listing
12.6 . Listing 12.7 represents the code for the default form object created whenever you have created a new
Windows application in VS.NET.
Listing 12.7 Form1.vb

Imports Novelty1.GetRowCount
Public Class Form1
Inherits System.Windows.Forms.Form
#Region "Windows Form Designer generated code"
Public Sub New()
MyBase.New()
'This call is required by the Windows Form Designer.
InitializeComponent()
'Add any initialization after the InitializeComponent() call.
End Sub
'Form overrides dispose to clean up the component list.
Protected Overloads Overrides Sub Dispose(ByVal disposing As Boolean)
If disposing Then
If Not (components Is Nothing) Then
components.Dispose()
End If
End If

MyBase.Dispose(disposing)
End Sub
'Required by the Windows Form Designer
Private components As System.ComponentModel.IContainer
'NOTE: The following procedure is required by the Windows Form
'Designer.
'It can be modified with the Windows Form Designer.
'Do not modify it with the code editor.
Friend WithEvents Label1 As System.Windows.Forms.Label
Friend WithEvents Button1 As System.Windows.Forms.Button
<System.Diagnostics.DebuggerStepThrough()> Private Sub InitializeComponent()
Me.Label1 = New System.Windows.Forms.Label()
Me.Button1 = New System.Windows.Forms.Button()
Me.SuspendLayout()
'
'Label1
'
Me.Label1.Location = New System.Drawing.Point(8, 16)
Me.Label1.Name = "Label1"
Me.Label1.Size = New System.Drawing.Size(248, 16)
Me.Label1.TabIndex = 0
Me.Label1.Text = "Label1"
'
'Button1
'
Me.Button1.Location = New System.Drawing.Point(264, 8)
Me.Button1.Name = "Button1"
Me.Button1.Size = New System.Drawing.Size(72, 24)
Me.Button1.TabIndex = 1
Me.Button1.Text = "Test It!"
'
'Form1
'
Me.AutoScaleBaseSize = New System.Drawing.Size(5, 13)
Me.ClientSize = New System.Drawing.Size(344, 54)
Me.Controls.AddRange(New System.Windows.Forms.Control() {Me.Button1, Me.Label1})
Me.Name = "Form1"
Me.Text = "Form1"
Me.ResumeLayout(False)
End Sub
#End Region
Private Sub Button1_Click(ByVal sender As System.Object, _
    ByVal e As System.EventArgs) Handles Button1.Click
Dim GRC As New Novelty1.GetRowCount()
Label1.Text = "There are " & GRC.GetRowCount.ToString & " rows in the table."
GRC.Dispose()
End Sub
End Class

The only modifications we made here were to add a Label object and a Button object. The label's text isn't
set until the code from our object has been executed. The next step from here is Web Services.


Exposing Objects Through Web Services


The next level, the new wave that's all the rage, is Web Services! Actually, the premise of using the Web as
a way to transfer data between two points was one of the original concepts that drove its development. This
concept had several limitations, though, and only recently have governing bodies such as the World Wide
Web Consortium (W3C) started to implement standards for such technology. Implementation of Web
Services in .NET is based on these standards and brings with it the chance to use XML to generically identify
and send data from Point A to Point B or Point B to Point B (B2B joke, sorry).
The business case for creating Web Services is a simple premise that has many practical complications. The
basic need for Jones Novelties, Incorporated, is to be able to allow other companies to access data quickly
and in a manner that doesn't require them to create a user interface to access that data. Web Services are
perfect for this task.
Before going further, you need to be aware of a few things regarding the technology of Web Services. If
you're already a pro, you may want to skip the next few paragraphs, or read them anyway as a refresher.
Web Services, by definition, are objects that exchange data via an Internet protocol, such as HTTP, using
XML to define either data or a set of instructions for the server to perform. These instructions may or may
not include the return of data. For example, you may send a request to a Web server that is hosting a Web
Service that looks like
http://www.someserver.com/services/dataserver.asmx?op=AddUserToDB&FName=John&LName=Doe
In this case, we're using HTTP protocol and a GET request to call a service called dataserver that has a
function AddUserToDB that takes the parameters FName and LName. Presumably, this function is adding a
user to a database. It would be great, though, if we had a way to access a Web Service and have it tell us
everything that it did. Luckily for us, Microsoft and others have thought of this possibility and created the
Web Services Description Language (WSDL, pronounced "wiz-dull"). WSDL looks at the code in a Web
Service and determines what it should tell users (human or otherwise) about the Web Service and what it
does. We present an example of this function later, in the Accessing Web Services Programmatically section.
As stated, a Web Service can be accessed via the HTTP commands GET and POST. The main difference
between GET and POST is that the data appended to the request is in the form of a query string for a GET
request and encapsulated in the body of a POST request. Additionally, the Simple Object Access Protocol
(SOAP) can be used to communicate with Web Services.
SOAP is a protocol that allows messages to be exchanged between servers via an envelope. Within this
envelope are the instructions for the request. With SOAP, the envelope is specifically crafted XML as, for
example,

<?xml version="1.0" encoding="utf-8"?>


<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>

<ShowGetRowCountResponse xmlns="http://localhost/">
<ShowGetRowCountResult>int</ShowGetRowCountResult>
</ShowGetRowCountResponse>
</soap:Body>
</soap:Envelope>

To send a request such as this, you would need to create your own Web request and send it to the server.
Fortunately, .NET provides ample functionality to handle this task in the System.Net namespace.
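As a rough sketch (the endpoint URL here is hypothetical, not part of the chapter's project), a request like this could be issued over HTTP GET with classes from the System.Net namespace:

```
Imports System.Net
Imports System.IO

Module WebServiceGetSketch
    Sub Main()
        'Hypothetical endpoint; an .asmx method invoked over HTTP GET
        'returns a small XML document containing the result.
        Dim url As String = _
            "http://localhost/Novelty1/NoveltyServices.asmx/ShowGetRowCount"
        Dim request As WebRequest = WebRequest.Create(url)
        Dim response As WebResponse = request.GetResponse()
        Dim reader As New StreamReader(response.GetResponseStream())
        'Writes the raw XML returned by the service.
        System.Console.WriteLine(reader.ReadToEnd())
        reader.Close()
        response.Close()
    End Sub
End Module
```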
In an attempt to demystify, yet not understate, Web Services, you can look at them like this:
You have a database with the names of 100,000 people, all aged 14 to 21. This data is kept up to date
constantly by visitors to your Web site answering the "Question of the Day." One day, Pepsi gives you
a call and wants to tap into your database for, say, $1,000,000 a month. How do you get Pepsi the
data? This is where Web Services can come into play.
Another scenario may look like this:
You're running a Web site that offers authentication functionality to other Web sites. They pay you a
monthly fee and send you the username and password from a Web page that sets cookies on clients,
indicating whether the users were authenticated. Sounds a lot like Passport, doesn't it? Here again is
an excellent opportunity to use Web Services.
Web Services gives a developer a way to expose functionality over the Web (Internet) or an internal network
(intranet) that resides on the same or different machine from any other applications and/or databases that
the developer may or may not know about. Web Services are what COM was supposed to be, but we can call
Web Services from any machine connected to the Internet or internal network, on any operating system.
Enough hype, let's take our component and add a function or two to it and see what happens.

Exposing an Existing Object Through a Web Service


It just doesn't get much easier than this. In most cases, we can expose the existing functionality of a
component through a Web Service by simply placing code in the Web Service that exposes the method as
public and has a return type. For example, consider our lowly GetRowCount object. To expose the
GetRowCount method of the GetRowCount class through a Web Service, complete the following steps.
1. In the Novelty1 project, create a new Web Service, NoveltyServices.asmx.

Note

If you're in a different namespace and need to set a reference, see the Using the Component
from Another Application section earlier in this chapter.

2. In the .asmx file, create a new WebMethod, ShowGetRowCount, that returns an integer. If you're
unsure how to do so, refer to the code shown in Listing 12.8.
3. Add the three lines of code in the function ShowGetRowCount() in Listing 12.8.
4. Right-click on the solution name in the Solution Explorer and select Build.
5. After the build is complete, right-click on the NoveltyServices.asmx file and select View in Browser.
6. Once the browser has loaded the page, click on the ShowGetRowCount hyperlink at the top of the
page.
7. View the results.
Listing 12.8 NoveltyServices.asmx.vb

Imports System.Web.Services
Imports Novelty1.GetRowCount
<WebService(Namespace:="http://localhost/")> _
Public Class NoveltyServices
Inherits System.Web.Services.WebService
#Region " Web Services Designer Generated Code "
Public Sub New()
MyBase.New()
'This call is required by the Web Services Designer.
InitializeComponent()
'Add your own initialization code after the
'InitializeComponent() call.
End Sub
'Required by the Web Services Designer
Private components As System.ComponentModel.IContainer
'NOTE: The following procedure is required by the Web Services
'Designer.
'It can be modified with the Web Services Designer.
'Do not modify it with the code editor.
<System.Diagnostics.DebuggerStepThrough()> _
Private Sub InitializeComponent()
components = New System.ComponentModel.Container()
End Sub
Protected Overloads Overrides Sub Dispose(ByVal disposing As Boolean)
'CODEGEN: This procedure is required by the Web Services
'Designer.
'Do not modify it with the code editor.
If disposing Then
If Not (components Is Nothing) Then
components.Dispose()
End If
End If
MyBase.Dispose(disposing)
End Sub
#End Region
<WebMethod()> Public Function ShowGetRowCount() As Integer
'These three lines of code haven't varied much.
Dim GRC As New GetRowCount()
ShowGetRowCount = GRC.GetRowCount
GRC.Dispose()
End Function
End Class

It really is that simple. Assuming that you had placed this code on a public Web server, anybody with a Web
browser could navigate to the page and execute the functionality. More likely than not, though, accessing
this code over the Web would be done programmatically. Let's see how that works.

Accessing Web Services Programmatically


The final level in this process is being able to use your program code to access a Web Service and use its
functionality from within your applicationbe it WinForms or WebForms.
Most of how to connect an application to an existing Web Service is taken care of when you create a Web
reference, so that's what we look at first. Recall that there are many ways to use this connection. Your
request could be a simple HTTP GET request, which is usually a URL followed by a question mark (?) and
ampersand-delimited parameters, such as
http://search.yahoo.com/bin/search?p=VB.Net
The request may be an HTTP POST request, whereby the parameters are passed in the body of the HTTP
request. Another popular option is the Simple Object Access Protocol (SOAP), whereby specially crafted XML
is sent to the Web server and the XML returned by the Web Service is sent back in a container established
by the SOAP request.
You can also create a new Visual Studio.NET Visual Basic.NET Console Application. The console is quite basic
and, as yet, we haven't used it. The principle, though, applies to any type of project. Once you have created
your project, right-click on the References element in the Solution Explorer, and select Add Web Reference.
You will get a dialog similar to that shown in Figure 12.1 . If necessary, simply replace the URL shown at the
top of the dialog and press Enter. You should momentarily have the results shown in Figure 12.1 . These
results are made possible by the Web Services Description Language, which embeds metadata in the code of
the Web Service that describes its contents. Click on Add Reference and the dialog will close. In the Solution
Explorer, you should now be able to see that the reference has been added as a Web reference, similar to
that shown in Figure 12.2.
Figure 12.1. Add Web Reference dialog

Figure 12.2. Web references in Solution Explorer

Note

Should the directory or Web server require any sort of authentication, that code must be present
in your application and not in the Web Service.

Now that your Web reference has been added, you're just three lines of code from having an executable
application that uses functionality across the Internet as if it were local. Listing 12.9 shows the entirety of
ConsoleApplication1, Module1.vb.
Listing 12.9 Module1.vb

Module Module1
Sub Main()
Dim GRC As New localhost.NoveltyServices()
Try
System.Console.WriteLine(GRC.ShowGetRowCount.ToString)
GRC.Dispose()
Catch
System.Console.WriteLine(Err.Description)
End Try
End Sub
End Module

Here, GRC is dimmed as localhost.NoveltyServices. The reason is that the server hosting the functionality in
this example is named localhost. It is also the namespace set in Listing 12.8 as the namespace for the Web
Service. Namespaces should be either something unique (such as the name of your company) or the fully
qualified domain name that the server hosting the functionality will have. For example, if you were at
Microsoft, the namespace you might use would be www.microsoft.com . Then, when someone needed to
access your Web Service from his code, he would declare an object as

new www.microsoft.com.objectname

Again, once you have the code in place for module1 and the Web reference set, right-click on the solution
name in the Solution Explorer and select Build. To execute this program, use a command window (DOS
Prompt) to navigate to the directory where the application exists. By default, it will be
c:\documents and settings\<username>\My Documents\Visual Studio Projects
From there, there will be a directory with the name you gave the console application and within that
directory will be a bin directory. It contains the .exe file that is your application. From the command prompt,
type the filename of the application including the .exe extension and press Enter. Within a few seconds, you
should see a number representing the number of rows in the tblCustomer table of the Novelty database.


Putting It All Together


Throughout the last several chapters, we have described many technologies and have given examples that
centered on a common goal: maximizing the use of a database to provide real business benefit. The literal
English translation of "business benefit" is saving money. The use of technology to save money is often
subjective, and frequently the words technology and savings are mutually exclusive.
From the beginning of this book, where we explained planning and creating the database, to the last few
paragraphs, where we explained how to truly extend your functionality by using reusable code and Web
Services, the goal has remained the same: to create the most robust application possible by using the most
efficient means possible. Many of the code samples in the book can literally be copied and pasted into an
application and used with little to no change (mostly in the queries). We firmly believe that, by using the
information we have presented, you will be able to successfully create and deploy an application by using
Visual Basic.NET and SQL Server, whether it be a traditional client-server (thick-client, WinForms) or
Web-based (thin-client, WebForms) application.


Summary
In this chapter, we described the middle tier and how it can benefit you from both performance and
reusability points of view. We also showed you how to create a reusable component that can then be
exposed through a Web Service and accessed from almost any application. We presented these important
concepts in a relatively small space. We did so intentionally to show you just how simply they can actually be
implemented without diving into which memory location gets called what when a Web request comes
through.

Questions and Answers

Q1: Can I call a .NET Web Service from Java?

A1: Yes. Without showing a huge code sample: Java's java.net library provides functionality for
connecting, via HTTP, to the URL of a Web Service running on the .NET platform; Java's XML
classes can then be used to parse the results.

Q2: What about exposing Web Services through my firewall?

A2: Web Services typically run on port 80 (the same as HTTP), so there are no special considerations
from a system administration point of view. At the lowest level, a Web Service is really just a
Web page that has no GUI. From a programming point of view, though, you may have to code
for authentication methods as set by the Web server.
