Introduction
A good plan, violently executed now, is better than a perfect plan next week.
George S. Patton (1885-1945)
Never put off until run time what can be done at compile time.
David Gries, Compiler Construction for Digital Computers
Chapter 1
Who is this book for?
- Beginners who have some experience programming in C# and are familiar with basic programming terminology.
- Intermediate developers who want to learn new techniques to shorten their development time.
- Technology strategists who are investigating this approach in order to choose a platform for a project.
- Those who want to put together a quick-and-dirty proof of concept for a database-driven application.
- Those who have some familiarity with SQL programming. (You do not need to be an expert, but you do need a basic understanding of SQL syntax to use and understand the generated framework.)
- Those wishing to learn cutting-edge development skills and techniques.
- Small development teams. Using these methods, even a single developer can build a powerful database-driven application and deploy it to their organization in a short amount of time. No large budget or department of developers required.
Who isn't this book for?
- Non-Windows developers.
- Developers who are completely unfamiliar with object-oriented programming. We are not going to explain all the details of OOP, and you may not understand the concepts or code if object-oriented programming is completely new to you.
- Developers with no SQL programming experience. While LLBLGen Pro will help you compensate for weakness in SQL, you may have a difficult time understanding and using the framework to write queries if you have no understanding of SQL syntax. Fundamentally, LLBLGen Pro is a wrapper (and more) for SQL, and knowing the basics will ensure that you can take full advantage of the tools.
- Those who are so familiar with SQL that they think in stored procedures and refuse to learn new techniques. You will want to avoid the temptation to just write a stored procedure instead of figuring out a way to use the generated framework. Like anything else, learning these new techniques takes time. Be patient, and you will achieve the results you want.
How to read this book
This book is a practical walk-through that will build a sample application, step by step, with directions, screenshots, and code samples. The book is intended to provide all the information necessary for a beginner to fully learn and grasp the tools. In order to gain maximum benefit, be sure to follow along with the sample application, performing the directions on your machine as explained in the text. However, if you are an intermediate user more familiar with the underlying concepts, you may elect to simply read the chapters and examine the code samples until you understand what is presented. Optionally, you can create your own project while reading through the book, using the concepts presented and making substitutions where necessary to apply it to your database schema and make it fit your application requirements.
Prerequisites
In order to fully utilize this book, you will need:

Visual Studio .NET 2005 (highly recommended). In full release as of November 2005, this software can be purchased from Microsoft at http://msdn.microsoft.com/howtobuy. Visual Studio .NET 2005 comes in a variety of versions and prices. Look for a version that includes C# and that allows you to develop class libraries and Windows applications. (You can use older versions of Visual Studio, but you will miss some of the newer, time-saving features. Refer to Figure 1.1 for a detailed
Figure 1.1. Visual Studio .NET 2005 version and feature comparison
Source: http://msdn.microsoft.com/vstudio/products/compare/
*This book focuses on Windows Forms, although many of the techniques presented are also transferable to Web Forms.
comparison of the versions of Visual Studio .NET 2005 available and the versions which include the features necessary to take full advantage of this book.)

SQL Server 2005 (or another compatible database, required). In this book we will be building database-driven applications with SQL Server 2005. Microsoft offers a variety of SQL Server products, including a limited version that is available for free, and developer versions that accompany specific versions of Visual Studio .NET 2005. Please note that we will not be covering the creation of your database or the design of your schema; we assume that you have already developed your database or have one available. Instead of SQL Server 2005, you can also use previous versions of SQL Server, Microsoft Access, Firebird, Oracle, or MySQL, and for the most part the process will be the same. Note, however, that SQL Server 2005 was used exclusively in the creation of this book, so if you use another database application, your results may vary.

LLBLGen Pro (required)1. Version: 1.0.2005.1. LLBLGen Pro is an O/R mapper. This tool will take an existing database schema and generate a data access tier (and more!) in a matter of seconds. LLBLGen Pro is available from http://www.LLBLGen.com for about $270 USD (EUR 229) and can be used by your entire development team. Although the product is an extra item to purchase, it is an invaluable tool for developers working with databases (a fully-functional 30-day demonstration version is also available and will allow you to work through the exercises in this book). Keep in mind that there is a free version of LLBLGen available, but we will not be using it because it uses stored procedures exclusively and does only a fraction of what the current version can do. The original version was entirely reengineered, rewritten, and released as
1) Please note that the products discussed in this book are recommended on their merit alone; the author is not employed by Solutions Design (the creators of LLBLGen Pro) or any other software company, and does not receive any kind of commission or compensation from any of these companies.
Figure 1.2. Lines of code generated/automated versus hand-coded in each project in this book
LLBLGen Pro. You will not be able to follow any of the code samples in this book without the retail version of LLBLGen Pro. You may be asking yourself, "Aren't those programs expensive? Why not do it by hand?" Our answer is: these programs save time! We prioritize speed (but not at the expense of reliability or maintainability) and therefore recommend that you consider how much time and energy these programs can save you, not only the prices of the products. The projects developed in this book serve as a good example (Figure 1.2). While you could write the same Windows application by hand without Visual Studio, the automation that Visual Studio affords can be tremendous. In our example application alone, about 68% of the code in the Windows application project is generated automatically. Most of this code is written as elements in the GUI are configured visually; only 32% of the code was written entirely by hand. The difference is even more pronounced using LLBLGen Pro in the application's class library, which contains the business logic and data access layer (these terms are discussed later). In this project, over 98% of the code is automatically generated. What the developer adds by hand amounts to a mere 1,139 lines of code, less than 2%. (Keep in mind that this application is in the beginning stages of development, and for this reason you will be adding much more custom code.) Another benefit of the code written by Visual Studio and LLBLGen Pro is that it is well commented, well spaced, and easy to read, which adds to the line count of generated code. The main point here, however, is that with these tools you can take a great leap forward on Windows projects and projects that use databases. Automating repetitive code is a major part of rapid development, and it is worth paying for. Now, let's take a look at the other concepts and principles that make this method of development a desirable choice.
O/R Mappers
An O/R Mapper creates classes defining objects that correspond to the structure of your database. Every row becomes an entity and every table becomes an entity collection. The fields of the database table become public properties of the entity object. The framework also builds in constructors and other useful methods to find entity objects, set their properties, and save them back to the database with just a few lines of code.
If you are unfamiliar with O/R Mappers, take a look at Figure 1.3. Here are two tables from a database. The tables are named Individual and AddressBook, and you will notice a relationship between the two. After running an O/R Mapper on this schema, you will get a class library that you can immediately begin referencing in your projects. To use a row from this database, you could write the following code in your project (Example 1.4).

1 // C# Example
2 IndividualEntity MyIndividual = new IndividualEntity(23);
3 MyIndividual.FirstName = "Joe";
4 MyIndividual.AddressBook[0].City = "New York";
5 MyIndividual.Save();
Example 1.4. O/R generated code example
Let's walk through this code step by step. The code in Line 2 automatically retrieves the Individual record with an IndividualID of 23 and loads it into a custom object called an IndividualEntity (this name comes from the original table). As you can see in Line 3, all of the fields of the original table are properties that can be accessed and changed.2 In Line 4, a related record in the AddressBook table is accessed and a property changed. And in Line 5, you see how easy it is to save those changes back to the database. Notice that all of these actions can be performed without writing any other extra code by hand, anywhere else. The generated code from the O/R Mapper handles all of the steps that you would normally have to code yourself, saving you from having to:
- Find the database server.
- Log in and open a connection to the database server.
- Select the particular database containing the information you want.
- Find the correct table.
- Find the correct row.
- Read all the values for that row.
- Convert every value from its SQL data type into the corresponding .NET data type while checking for and handling the possibility of a null value.
- Present those values in a strongly-typed format, so the consumer knows exactly what kind of object to expect (string, integer, array, etc.) and there are no surprises at run time.
- Create a container to temporarily hold the values while they are being modified.
- Retrieve data from another row in a related table.
- Track which values have changed and issue the appropriate INSERTs, UPDATEs, and DELETEs in the appropriate tables in the database to reflect those changes.
- Close the connection.
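For comparison, here is a rough sketch of what just the fetch-and-update portion of Example 1.4 might look like when hand-coded against plain ADO.NET. The connection string, table, and column names are assumptions for illustration, not code from the book's sample application:

```csharp
// Hand-coded ADO.NET sketch of part of Example 1.4 (illustrative only).
using System;
using System.Data.SqlClient;

class HandCodedExample
{
    static void Main()
    {
        // Assumed connection string; adjust for your environment.
        string connStr = "Server=(local);Database=MyDb;Integrated Security=SSPI;";
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Read one row and convert its value by hand, handling NULL.
            string firstName;
            using (SqlCommand select = new SqlCommand(
                "SELECT FirstName FROM Individual WHERE IndividualID = @id", conn))
            {
                select.Parameters.AddWithValue("@id", 23);
                object value = select.ExecuteScalar();
                firstName = (value == null || value == DBNull.Value)
                    ? null : (string)value;
            }

            // Write the modified value back.
            using (SqlCommand update = new SqlCommand(
                "UPDATE Individual SET FirstName = @name WHERE IndividualID = @id", conn))
            {
                update.Parameters.AddWithValue("@name", "Joe");
                update.Parameters.AddWithValue("@id", 23);
                update.ExecuteNonQuery();
            }
        }
    }
}
```

And this covers only a couple of the steps listed; the related-table access and automatic change tracking from Example 1.4 would require still more code.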
Whew! That's a lot of time saved! And that's just the beginning. For those who are not familiar with all of the aspects of programming, let's elaborate on what it means to be strongly-typed and why strongly-typed objects are so helpful to developers.
Strongly-Typed Objects
A major feature of using an O/R Mapper to auto-generate your code involves the use of strongly-typed objects. Instead of exposing simple and generic properties and methods, your generated code should expose specific objects you will actually be using. To understand the advantages of working in this manner, consider this analogy. When Bob goes home every day from work, he puts his keys on the table, drops his briefcase on the floor, and heads straight to the kitchen to make himself dinner. Every day, Bob does the same thing: he grabs a frozen dinner from the freezer and pops it in the
2) More specifically, all columns in the table can be read and columns that are non-key, non-calculated fields can be changed. These concepts are explained later.
microwave. Bob has performed these actions so often that he does not really think about doing them each day. He just opens the freezer, grabs something, and sticks it in the microwave. We could express this particular freezer-to-microwave exchange in C# as the following:

1 Object MyDinner;
2 MyDinner = House.Freezer.GetObject();
3 House.Microwave.Cook(MyDinner);
Example 1.5
You do not have to be a programmer to know that this situation is a disaster waiting to happen! Most of the time, whatever Bob grabs from the freezer will probably be a frozen dinner, or at least something edible. But can he really rely on mindlessly pulling anything from the freezer? If Bob has kids, he will not know from one day to the next whether he will find stuffed animals in the fridge, pop-tarts in the DVD player, or razor-sharp toys peppered along the staircase. Bob would be wise to check exactly what it is he is pulling out of the freezer before he sticks it in the microwave. Now here is the same code improved by using a more specific object.

1 FrozenDinner MyDinner;
2 MyDinner = (FrozenDinner)House.Freezer.GetObject();
3 House.Microwave.Cook(MyDinner);
Example 1.6
We have improved this code by using a FrozenDinner object. Now, if we try to grab something from Bob's freezer that is not a FrozenDinner, we will get an error when we try to cast it. We could improve this even further by checking the type of the object before we cast it and accounting for the possibility of a non-FrozenDinner object. If we used the as keyword instead of casting the object, we would eliminate the chance of a casting exception, but then we would need to check for a null FrozenDinner object before trying to cook it. In the .NET world, this kind of situation is very common when you access data from external sources. One of the more common objects you will use is a DataTable. In .NET, DataTable objects are wonderful and extremely flexible. But when you read data from a database into a DataTable and you need to get specific with the contents of a particular field on a particular row, the .NET framework only gives you an object of type, well, object. A plain object is about as generic as you can get, and the .NET framework does this intentionally to give you maximum flexibility. But if you assume that that object is a string, and will always be a string, you are entering the world of assumptions, and code that is built on assumptions is brittle and unpredictable. If you accidentally change the name of a column in your database or reverse the order of columns in a set of records, you are asking for trouble. Minor changes can doom your code, and the worst part is that you will not know things have gone wrong until the code fails miserably at run time, simply because what you always assumed would be a string happened to be an integer or a boolean. Now let's return to Bob's frozen dinner for a moment. We would eliminate a lot of guessing and unnecessarily complex code if the method that gave us the frozen dinner simply returned an object of type FrozenDinner and not an object of type object.
That would save us the trouble of casting it and accounting for all the possibilities of a non-FrozenDinner object. Consider this final version of C# code:

1 FrozenDinner MyDinner;
2 MyDinner = House.Freezer.GetFrozenDinner();
3 House.Microwave.CookFrozenDinner(MyDinner);
Example 1.7
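The defensive check mentioned earlier, using the as keyword, can be sketched as follows. FrozenDinner and House are, of course, the hypothetical types from the examples above:

```csharp
// Defensive variant of Example 1.6 using "as" (illustrative sketch; types are hypothetical).
object item = House.Freezer.GetObject();

// "as" yields null instead of throwing InvalidCastException when the cast fails.
FrozenDinner myDinner = item as FrozenDinner;
if (myDinner != null)
{
    House.Microwave.Cook(myDinner);
}
else
{
    // Not a frozen dinner; handle the stuffed animal gracefully.
    Console.WriteLine("The freezer held a " + item.GetType().Name + ", not dinner.");
}
```

This trades the run-time cast exception for an explicit null check, which is exactly the trade-off described in the text; the strongly-typed GetFrozenDinner of Example 1.7 avoids both.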
In the real world, our database solution might entail extending a DataTable and specifically defining the type of every column. It might also entail creating a custom class and writing methods that read the data from the database table and add it to the properties of the custom class. Repeating this process by hand, for every table, stored procedure, and
view in your database, while a wonderfully effective programming practice, would take you the rest of your natural life. Just consider how long it would take to manually write a new method for every kind of object Bob might possibly want to take from his freezer. Unfortunately, these painfully slow methods of development are very common. But there are several ways to automate this process, which we will discuss in a moment. The other general principle related to strong typing worth mentioning is that compile-time errors are always preferable to run-time errors. And of course, having no errors is the most desirable! Until computers begin to write their own code without human input, there will always be errors of one kind or another. But not all errors are created equal. Compile-time errors, which arise during compilation, are easier to fix because they happen 100% of the time. If you have one of these, you cannot compile your application no matter how many times you try. You must fix the problem before continuing. On the other hand, code containing a run-time error, which arises as the application is running, will compile 100% of the time but may only occasionally throw an error while executing. These kinds of errors are harder to test and harder to catch because they are inconsistent and only occur when a particular function is called and specific conditions are met. Now, consider the three different frozen dinner code examples mentioned earlier. In the first example, we will never get any kind of error when compiling or executing (which is good), but unfortunately for Bob's household, we could end up with strange items in the microwave (not good). In the second example we have prevented putting non-FrozenDinner objects in the microwave. We will have no compilation errors (good), but we have a real chance of run-time errors if the object in the freezer is not a frozen dinner (not good).
In the last example we get the possibility of compile-time errors if we code incorrectly (very easy to fix), no run-time errors (very good!), and still no strange objects in the microwave (what we hoped for). Although there are many ways to solve this problem, the third method is the most reliable, error-free solution.
In the newest version of LLBLGen Pro, however, queries can be constructed in an intuitive manner more consistent with normal C# syntax. Consider Example 1.9, where the same predicate has been written with native language syntax. The end result is more readable code and less time spent learning the syntax.

1 IPredicate MyPredicate = (EmployeeFields.HireDate >= HireDateAfter) &
2                          (EmployeeFields.SalariedFlag == true);
Example 1.9. Creating predicates with natural language construction
Native language filter construction in C# is a helpful and time-saving new feature in LLBLGen Pro, and throughout this book, we will use this technique in code examples.
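To make the predicate concrete: with the self-servicing code LLBLGen Pro generates, a filter like the one in Example 1.9 is typically passed to a collection fetch. The sketch below assumes generated EmployeeCollection, EmployeeEntity, and EmployeeFields classes for an Employee table, so treat the exact names and the sample date as illustrative:

```csharp
// Illustrative sketch: fetching entities with a natural-language predicate
// (assumes LLBLGen Pro self-servicing generated code for an Employee table).
DateTime hireDateAfter = new DateTime(2004, 1, 1);

IPredicate filter = (EmployeeFields.HireDate >= hireDateAfter) &
                    (EmployeeFields.SalariedFlag == true);

EmployeeCollection employees = new EmployeeCollection();
employees.GetMulti(filter);   // the SQL is generated and executed here, at run time

foreach (EmployeeEntity employee in employees)
{
    Console.WriteLine(employee.LastName);
}
```

Note that no SQL string appears anywhere; the dynamic query engine translates the predicate into a parameterized query when GetMulti runs.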
To utilize this DataSet you can simply use a TableAdapter, an object that retrieves data from the database in order to fill the DataSet. If you open the XSD file in Notepad, you can see the underlying structure of the DataSet (Figure 1.12). Notice that SQL statements are included in the description of this DataSet: each table description contains the specific SQL statement necessary to perform a SELECT, INSERT, UPDATE, or DELETE action. Although both the LLBLGen Pro method and Visual Studio's strongly-typed DataSet objects are solutions that provide a data access layer, they are fundamentally different approaches. Let's take a look at some of the reasons why LLBLGen Pro is the more usable of the two.

Strongly-typed DataSet advantages: Using strongly-typed DataSet objects is a good technique that implements a number of our best practices. First, you can quickly and easily reap the rewards of using strongly-typed objects and have assistance in creating your SQL statements and stored procedures. The strongly-typed DataSet also automates the process of consuming the data in your C# code; thus, both the query-generating and consuming features save time. The custom DataSet can also be referenced across your projects, becoming a handy and reusable data access layer. A key advantage of this method over LLBLGen Pro is that you do not need any other third-party software: this functionality is built into Visual Studio.

Strongly-typed DataSet disadvantages: Users of strongly-typed DataSet objects encounter several setbacks, all of which LLBLGen Pro helps address:

Query limitation: This disadvantage stems from the fact that the SQL statements used by strongly-typed DataSet objects are created when the DataSet is designed, not when it is used. With LLBLGen Pro objects, however, the SQL code is generated as the object is used.
You can find the SQL statements by looking inside the DataSet's XSD file, but in looking through the code that LLBLGen Pro produces you will find no SQL statements anywhere. The LLBLGen Pro dynamic query engine for SQL Server creates the statements only when the data is retrieved from the database. Because the SQL statements are all created when the DataSet is defined, if you need a new query that has not been defined in advance, you will always need to redesign your DataSet by adding the new query before you can consume it elsewhere in your code. With LLBLGen Pro you can create and consume the query in your C# code by using the LLBLGen Pro framework, all without changing the underlying data layer. You would only need to change your data layer (by regenerating your code) if you made changes to your schema or added a new type of database object.

Difficulty propagating schema changes: When you do make changes to your schema, strongly-typed DataSet objects do not provide a built-in method for propagating those changes to your DataSet definition and updating your SQL statements. You would need to delete the corresponding DataSet table, re-add it from the database, and then re-create all of the custom SQL queries that you had defined for that table (since only one query is added by default). Fundamentally, there is no built-in way for Visual Studio to refresh your DataSet schema with changes from your database's schema. If you forget to drop and add each table that changed, or to refresh the list of columns that have new names or data types, your code will mysteriously begin
to throw exceptions. LLBLGen Pro makes it easy to bring those changes to your code without losing any of your customization. In Chapter 12, you will see how LLBLGen Pro automatically scans your entire database and makes the necessary changes to your project for you.

Limited data layer functionality: In order to traverse many-to-many relationships between tables, DataTable objects require you to use intermediate tables. LLBLGen Pro allows you to traverse relationships between entities more easily than you can using DataTable objects. You can actually traverse a many-to-many relationship by skipping over the intermediate table, an action that is not possible with a strongly-typed DataSet.

No help with the business logic layer: A strongly-typed DataSet does not help you with your business logic layer. LLBLGen Pro can optionally create a business logic layer shell that saves you the trouble of creating one yourself. We will use this method in our walk-through.

Requirement of adapters in order to get data: In order to retrieve data for your strongly-typed DataSet, you must first create a TableAdapter object. If you use the self-servicing template when generating LLBLGen Pro code, you do not need any extra object like a .NET TableAdapter to retrieve data. It is as if your DataRow objects fill themselves with data automatically, and this means you have to write fewer lines of code.

No support for custom SQL queries: Even with the Visual Studio automated tools, you will still need to get your hands dirty writing SQL: some queries are too complex for the Query Manager. The LLBLGen Pro framework allows you to generate complex queries and replace the logic of many stored procedures without writing any SQL yourself.
These features make LLBLGen Pro a compelling choice for database access. But by now, you may be wondering, "If O/R Mappers are so great, why isn't Microsoft using this approach?" In fact, Microsoft is working on it right now. Originally, Microsoft was developing a project dubbed ObjectSpaces, a method of representing relational data as objects. This project was eventually assimilated into new projects not yet completed.3 One of those projects is named WinFS, which will contain technology that is similar to O/R Mappers. Quentin Clark, the project management team leader for WinFS, stated in his blog, "We are in the process of building-out the next version of ADO.NET to have new features that provide a data model, object-relational mapping, and flexible query infrastructure. The new data model is about entities, and the WinFS data model of Item types is built on that model."4 WinFS has been under development since 2002 and is currently in Beta 1. The final release date for WinFS is uncertain, but it will certainly be after the next version of Windows (Vista) is released. The other main project from Microsoft is named LINQ, which stands for Language INtegrated Query. According to the Microsoft Developer Network, LINQ is "a codename for a set of extensions to the .NET Framework that encompass language-integrated query, set, and transform operations. It extends C# and Visual Basic with native language syntax for queries and provides class libraries to take advantage of these capabilities."5 LINQ will allow the developer to run queries written as C# statements against many different kinds of objects loaded into memory. The scope of this project is much wider than sorting relational data: LINQ commands can sort data regardless of its source. At the moment, it is difficult for the developer to accomplish this alone; most sorting and filtering operations are relegated to the database.
LINQ will allow many of those operations to occur in the C# code where the .NET code is running instead of in the database where SQL Server is running. LINQ will work with LLBLGen Pro generated code and will therefore be of great benefit to the developer when it is finally released. In addition to LINQ, Microsoft is creating a project specifically aimed at relational data, called DLINQ. DLINQ is an O/R mapper from Microsoft that will generate classes corresponding to your database's tables and will use lazy loading (or "deferred query execution," in Microsoft's terminology) to dynamically generate SQL to retrieve, modify, and delete data. The DLINQ project is simply a lightweight, Microsoft-built O/R Mapper. At the 2005 Microsoft Professional Developers Conference, Microsoft gave attendees a demo of the project. Because DLINQ is still in development, it currently offers far fewer features than LLBLGen Pro and will probably require several more versions before it has a feature set similar to what LLBLGen Pro offers today. The good news is that you do not have to wait for Microsoft to release WinFS, LINQ, or DLINQ in order to take advantage of O/R Mapping. LLBLGen Pro is already available and has proved to be a mature and stable product.
3) See http://msdn.microsoft.com/data/objectspaces.aspx. 4) See http://blogs.msdn.com/winfs/archive/2005/08/29/457624.aspx. The scope of WinFS is larger than just O/R Mapping, but its inclusion in WinFS demonstrates that it is a useful technique for handling relational data programmatically. 5) http://msdn.microsoft.com/netframework/future/linq/
While these methods are acknowledged best practices, most developers find that they do not have the time to design the application this way; rules are violated frequently out of the need to just get the product out the door. However, LLBLGen Pro helps accomplish the ideal design in the following ways:
- LLBLGen Pro will completely auto-generate the data access layer. You do not need to do any hand-coding in this layer.
- LLBLGen Pro will optionally generate a business logic layer shell to get you started on your business layer. You do not have to understand the intricacies of inheritance and custom classes to take advantage of business logic classes; in LLBLGen Pro these classes are intuitively organized, powerful, and infinitely extendable.
- LLBLGen Pro collections and entity objects are bindable, making them extremely easy to attach to .NET controls. This can reduce the size of your UI layer.
- Changing the database application and the data access layer without affecting the business logic and UI layers is actually possible with LLBLGen Pro. If you migrated your database schema and stored procedures to a new database application, refrained from using any database-specific features (there are not that many), and your schema matched exactly, you could re-generate your LLBLGen Pro data access layer and not make any other changes to your application!
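The data-binding point above can be sketched in a line or two. The collection and grid names here are assumptions for illustration (a generated collection for a hypothetical Customer table, and a Windows Forms DataGridView):

```csharp
// Illustrative sketch: binding a generated LLBLGen Pro collection to a grid
// (CustomerCollection is a hypothetical generated class for a Customer table).
CustomerCollection customers = new CustomerCollection();
customers.GetMulti(null);            // null filter: fetch all rows

// Because the collection supports the standard .NET binding interfaces,
// it can be assigned directly as a data source for a bindable control.
dataGridView1.DataSource = customers;
```

No mapping or adapter code sits between the fetched entities and the UI control, which is how the UI layer stays small.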
Inline SQL: This is the most common method of database access seen in most tutorials, but it is by far the worst. Inline SQL is illustrated in Example 1.13.

1 SqlConnection conn = new SqlConnection(connection);
2 SqlDataAdapter adapter = new SqlDataAdapter();
3 adapter.SelectCommand = new SqlCommand("SELECT * FROM Orders WHERE OrderID=" +
4     tbInput.Text, conn);
5 adapter.Fill(dataset);
6 return dataset;
Example 1.13. Inline SQL example
From a security standpoint, a user could enter any kind of data into an input field and have it concatenated directly into the SQL statement. This presents an alarming security risk: you do not want users executing arbitrary code in your database. Hackers who are familiar with SQL injection attacks6 can alter the above query to do just about anything they want it to do. Also, from a maintainability standpoint, if you decide to change the name of the Orders table to Order, you do not have any way to propagate this change to your data layer. While the code will always compile correctly, it will throw an exception when the query executes and the Orders table is not found.

Stored procedures: Most security experts recommend using stored procedures exclusively for database access. From a security perspective, stored procedures are immune to SQL injection attacks, since parameters entered into a stored procedure cannot change the query itself. From a maintainability standpoint, you do not have SQL strings inside your code when you use stored procedures, but you will still have stored procedure names in your code. If you decide to rename a stored procedure, just as in the above example, the name change will not automatically propagate to your code. The code will compile correctly, but will throw an exception when it is executed and the stored procedure is not found. The stored procedure method is also a management challenge due to the difficulty of maintaining a vast library of stored procedures. The previous version of LLBLGen Pro relied on auto-generated stored procedures to carry out the SELECT, INSERT, UPDATE, and DELETE operations; for each table, five stored procedures were generated (one extra for selecting all the records). In essence, the auto-generated stored procedures and custom stored procedures create another layer of code that must be maintained.
Maintaining code in SQL Server can be much more difficult than in Visual Studio for several reasons: the lack of IntelliSense, a difficult debugging environment, cryptic (and unhelpful) error messages, and the inability to organize similar functions into logical objects. Yet another disadvantage of stored procedures is that most C# code that calls a stored procedure returns an untyped DataSet or DataTable. This again creates the problem of not knowing 100% of the time what kind of object will be inside each column and row. Although the code will compile, when executed it may throw exceptions if different types of data are in different places. From a permissions standpoint, stored procedures do offer tighter security. You can allow users access to only the stored procedures, and they will not be able to access the underlying tables to make any other kinds of changes (e.g., changing the structure of a table). Setting permissions with dynamic SQL (discussed in the next section) is a little more complex because you need to set SELECT, INSERT, UPDATE, and DELETE permissions individually on the tables themselves. Another commonly cited advantage of stored procedures over other SQL statements is performance. The architecture of SQL Server is believed to speed up stored procedures by precompiling the query when it is created. However, MSDN documentation7 states the following: SQL Server 2000 and SQL Server version 7.0 incorporate a number of changes to statement processing that extend many of the performance benefits of stored procedures to all SQL statements. SQL Server 2000 and SQL Server 7.0 do not save a partially compiled plan for stored procedures when they are created. A stored procedure is compiled at execution time, like
6) For more information about SQL injection, see http://msdn.microsoft.com/msdnmag/issues/04/09/SQLInjection/ or query Google for SQL injection attacks. 7) See http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_da_0nxv.asp or query Google for SQL Server and statement processing.
any other Transact-SQL statement. SQL Server 2000 and SQL Server 7.0 retain execution plans for all SQL statements in the procedure cache, not just stored procedure execution plans. The database engine uses an efficient algorithm for comparing new Transact-SQL statements with the Transact-SQL statements of existing execution plans. If the database engine determines that a new Transact-SQL statement matches the Transact-SQL statement of an existing execution plan, it reuses the plan. This reduces the relative performance benefit of precompiling stored procedures by extending execution plan reuse to all SQL statements.

The newer versions of SQL Server thus blur the line between stored procedures and other SQL statements from a performance standpoint. The same query will be cached regardless of whether it is a stored procedure or a SQL statement. SQL statements also have the advantage of being able to vary, whereas the logic of a stored procedure is fixed. While changing the structure of the query means that a dynamic query will not take advantage of the caching feature the first time it is executed (it will on subsequent executions), the dynamic query has the capability of altering itself to fit the exact need. A SQL statement therefore can give the developer an opportunity to solve a problem more efficiently than an existing stored procedure might. Keep in mind, though, that neither method is better than the other all of the time; both techniques have strengths and weaknesses. The main point here is that stored procedures do not always have a strong performance advantage over SQL statements. The last disadvantage of stored procedures is that complex procedures can get very ugly very quickly. A complex search with many different optional parameters, optional joins, and a variety of sorting methods would be exceptionally difficult to implement in a single stored procedure.

Dynamic SQL: The least common and probably least understood method of database access is dynamic SQL.
With this method, a custom component automatically generates the proper SQL statements for you as you use the object. Many developers do not even know this option exists, and understandably so. If you do not have time to code according to the best practices, you certainly will not have time to write your own component that generates SQL statements automatically. The good news is that you do not have to. The developers of LLBLGen Pro have made the components for you, and their components are smart enough to speak to a number of different databases. You can think of them as universal translators for databases. This frees you up to interface with the components in a generic, non-database-specific manner. If you follow the development of Internet applications, the transition to dynamic SQL will seem quite natural. Just as the HTML and JavaScript code in many of today's web applications is not always written by hand, but is instead encapsulated into ASP.NET server controls which generate the correct code automatically, so too are objects now generating SQL to speak to databases, freeing the developer from having to write all this extra code manually. As the queries are generated, instead of being concatenated together, the SQL statements are parameterized, just like stored procedures. This makes them immune to SQL injection attacks. Thus, with dynamic SQL, developers get the security advantages of stored procedures without the hassle of maintaining another layer of code in the database. In addition, to help with maintaining your code, LLBLGen Pro generates the equivalent of a master index to all the objects in your database and all of their fields, saving you from having to hard-code strings with table names, stored procedure names, and field names.
These enumerators and other LLBLGen Pro objects allow you to consume data in a way that will no longer compile when the schema changes and you have refreshed the data access layer; in this way, LLBLGen Pro turns schema changes that would otherwise cause unanticipated run-time exceptions into compile-time errors. Complex querying is also much easier with dynamic SQL than with stored procedures. A dynamic query can grow to include joins where necessary, or can alternatively use a variety of branching logic, optional parameters, and any possible sorting option in the same method. You have near-infinite flexibility with dynamic SQL, limited only by your own ingenuity and creativity. For database access, dynamic SQL is powerful, under-used, and often misunderstood. By the end of this book, you will be an expert user of this type of access, and you will enjoy all of its advantages over the less efficient inline SQL and stored procedure methods.
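The parameterization that a dynamic SQL engine performs can be sketched in plain ADO.NET terms; LLBLGen Pro builds queries of this general shape internally, though the exact SQL it emits will differ, and the names here are illustrative:

```csharp
using System.Data.SqlClient;

// SAFE: the value travels as a parameter, separate from the SQL text,
// so user input can never change the structure of the statement.
string sql = "SELECT * FROM Orders WHERE CustomerID = @CustomerID";
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@CustomerID", txtCustomerId.Text);
    conn.Open();
    SqlDataReader reader = cmd.ExecuteReader();
    // ...consume rows...
}
```

Because the parameter is typed and escaped by the provider, the injection payloads shown earlier arrive as harmless literal values.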
Summary
In summary, here are the reasons the combination of LLBLGen Pro, SQL Server 2005, Visual Studio 2005, and the associated techniques is so powerful:

N-Tier design: Using N-Tier design is an effective way to break your code into layers that accomplish different tasks and are easily reusable.

Customizable business layer templates: LLBLGen Pro generates templates for you to customize with your own code, so you do not have to create your own files, classes, or namespaces; you can start customizing immediately. LLBLGen Pro also generates classes in which to add your own custom validation logic.

Reusable core code: If you decide to switch to a web application, you can use the same generated classes and business logic. All you need to do is change the UI-specific code. This book will show you how to develop so that the maximum amount of your custom code is reusable.
8) For more information about using nullable types in C#, see http://msdn.microsoft.com/vcsharp/2005/overview/language/nullabletypes/.
No SQL or stored procedures required: You can write a complete database application without using any SQL statements or stored procedures. This way, your database does not become cluttered with millions of stored procedures. Best of all, your code and logic are all in one place, not scattered throughout your classes and your database.

Dynamic SQL generation: As discussed above, dynamic SQL is an efficient alternative to stored procedures and inline SQL.

No data type conversion hassles: You do not have to worry about converting between SQL data types and .NET data types, or checking for nulls. The generated code does this for you automatically, though you still have the option of explicitly checking for nulls if you need to.

Consistent and bug-free code: If twelve different programmers write your data-access code, they are probably not going to be 100% consistent with naming conventions and structure, not to mention that humans are guaranteed to make mistakes. Generated code is 100% consistent and reliable.

Strongly-typed code that allows fewer errors: Almost all the code generated for you is strongly-typed, which means that errors are discovered when the code is compiled, before it is run.

Lightweight and powerful code: While it is possible to generate strongly-typed DataSet objects in Visual Studio, LLBLGen Pro code is smaller, cleaner, and infinitely more powerful. Like DataSet objects, all generated Entity and EntityCollection objects are bindable. And with new features in Visual Studio 2005, you can do more binding visually.

Easier and less error-prone schema changes: Schema changes are a hassle because it takes time to figure out how the changes will affect existing code and logic, both inside and outside the database. With the O/R Mapper, changes to the schema are propagated into the generated code, and you immediately see where breaking changes have occurred.
Database-independent code: LLBLGen Pro abstracts your database from your application, so you are not locked into one database product. This flexibility means that you could switch to a completely different database with minimal to no impact on your custom code. If you stick to functionality common to all databases, you will not need to change a thing.

Visual workspace: Much of the work in Visual Studio can be done visually, without having to code by hand. The visual nature of these tools saves you time and frustration, and the results are much more predictable. Visual Studio 2005 adds even more visual features, and we will show you how to take advantage of this new functionality.

Powerful pre-built controls: With every new version of Visual Studio, the controls get better and better. As you learn how to use Visual Studio 2005's bindable objects and DataGridView controls, you will be able to present your data more professionally and efficiently.

Easy deployment: The newest version of Visual Studio has useful deployment options that make a Windows application as easy to deliver to clients as a web application. Client machines check for new versions of the software automatically, greatly reducing the burden on the developer.
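As an aside on the null-handling point above, the C# 2.0 nullable value types mentioned in footnote 8 are what make a clean mapping of NULL-able columns possible. A small illustration (the variable names are hypothetical):

```csharp
// A nullable int can hold either a value or null, mirroring a nullable column.
int? weight = null;  // e.g. a Weight column that is NULL in the database

if (weight.HasValue)
{
    Console.WriteLine(weight.Value);
}

int effectiveWeight = weight ?? 0;  // the ?? operator supplies a default for NULL
```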
In the next chapter, we will take a look at what an O/R Mapper is and how you use it to generate your data access layer and business layer.
Chapter 2
[Schema diagram: a portion of the AdventureWorks database, showing the Address, AddressType, Contact, CountryRegion, Customer, CustomerAddress, Employee, Product, SalesOrderDetail, SalesOrderHeader, ShipMethod, SpecialOffer, SpecialOfferProduct, and StateProvince tables with their primary keys, foreign keys, and indexes.]
extra hassle. If you tend to be conservative with your schema, consider doing more normalization than you might ordinarily do. Remember that nothing can fix bad database design: Instead of assigning auto-numbering ID fields to every table, think through the data that the table will contain and use primary keys that will ensure you do not end up with duplicate data. A little foresight will save you from having to spend days cleaning up data due to poor schema design.
would represent one row in the Employee table. We can create a new row, retrieve an existing row, update a row, or delete it by interacting with this EmployeeEntity object. Each column in the Employee row is exposed as a property of the EmployeeEntity object. LLBLGen Pro refers to these properties as fields.

Collections: For every entity class, LLBLGen Pro also creates an entity collection. An entity collection contains entity objects, just like a table contains rows. From the Employee table in the AdventureWorks database, LLBLGen Pro would create an EmployeeCollection that holds EmployeeEntity objects. These collection classes eliminate the need for DataTable objects and are both strongly-typed and bindable. Instead of running a stored procedure and getting a DataTable, you can create criteria using the LLBLGen Pro framework and retrieve a collection of entities that match your criteria.

Entity relationships: Built into each entity are all the relationships in the database involving that table. These relationships help you navigate between related tables. Using an EmployeeEntity, you can immediately retrieve the related ContactEntity (using the m:1 relationship between the Employee table and the Contact table) or an AddressCollection (using the m:n relationship through the EmployeeAddress table). You will get either a single entity or an entity collection, depending on the type of relationship. This built-in property saves you the trouble of navigating to that table and filtering out the unnecessary rows yourself.

Typed views: Views in the database can be wrapped as strongly-typed DataTable objects. This means that LLBLGen Pro will create a new class that inherits from a .NET DataTable and specifically defines the contents of every column in that view. Typed views are read-only. A new feature in the latest version of LLBLGen Pro also allows you to add a view from the database as an entity as well as a typed view.
We will discuss the differences between these two methods in later chapters.

Typed lists: Typed lists are the only objects created by LLBLGen Pro that do not correspond one-to-one with database objects. When you generate your code, you have the option of creating your own strongly-typed lists of columns from either one table or multiple tables. We could, for example, create a list from the Employee and Contact tables, but only use the columns BirthDate, FirstName, and LastName. We could add the criterion Gender = F, and fill our typed list with only those rows. Typed lists are handy when you need very specific information that does not necessarily correspond to a single table/entity, or when you only want to grab a subset of information for a given set of tables. Like typed views, typed lists are read-only.

Stored procedure caller classes: Stored procedures that you select will be wrapped in a layer of code, making them easy to access. This can make the migration to LLBLGen Pro much easier, as you can gradually wean yourself off unnecessary stored procedures without having to migrate all at once. While the parameters of the stored procedure are strongly-typed, remember that the result set is still an untyped DataSet. Despite this disadvantage, even if you only used LLBLGen Pro to expose your stored procedures in your data access layer with a consistent naming scheme, it would still save you hours of development time.

Now that we have a good idea of what items we can have generated with LLBLGen Pro, let's look at the schema scanning and code generating process.
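To give a feel for how these generated classes are consumed, here is a hedged sketch in the Self-Servicing style discussed later in this chapter. The class names follow the naming pattern described above, but the exact members depend on the code generated for your schema:

```csharp
// Fetch one row as an entity (hypothetical generated classes for AdventureWorks).
EmployeeEntity employee = new EmployeeEntity(21);  // loads the Employee row with ID 21
DateTime hireDate = employee.HireDate;             // each column is a typed property

// Navigate a built-in relationship instead of writing a join by hand.
ContactEntity contact = employee.Contact;

// Work with a set of rows through a strongly-typed, bindable collection.
EmployeeCollection employees = new EmployeeCollection();
employees.GetMulti(null);  // a null filter fetches all rows
```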
During this process, the application is reading your schema and saving the information into your project file. Once you have finished this scan, you do not have to be connected to your database in order to complete the code generation process. The only time you will need to connect to the database again is if you make schema changes and need to regenerate your code. Working in this disconnected manner is both convenient and expedient. Once the scanning process has completed, you should see a screen like the one in Figure 2.3. From here we will add objects from the database and configure them.
Creating Entities
The first set of items we need to add to our project are entities. We now need to tell LLBLGen Pro exactly which tables from our database we would like to use. Right-click on Entities in the left bar, and select Add New Entities Mapped on Tables from Catalog(s) (Figure 2.4). The Designer lets you select from a list of all the tables in your database. At this step, you will generally want to add all the tables you think you might ever access. Only leave a table out if you have a specific reason not to include it or you know for certain you will never use it. When you select tables, LLBLGen Pro will examine all the relationships between the tables and create methods to navigate between the entities. The more tables you add, the more relationships LLBLGen Pro will find, including relationships you had not anticipated! For this walk-through, select only the tables listed in Table 2.5. Adding tables is as simple as checking the boxes (Figure 2.6) and clicking Add to project when you are done. You will see the tables we just added under the Entities node (Figure 2.7).
Table Names: Address, AddressType, Contact, CountryRegion, Customer, CustomerAddress, Employee, EmployeeAddress, Individual, Product, SalesOrderDetail, SalesOrderHeader, ShipMethod, SpecialOffer, SpecialOfferProduct, StateProvince
Table 2.5. List of tables to add as entities from the AdventureWorks database
Entity Options
Now let's take a look at all of the options that you can configure for every entity in your project.
Right-click on the Address entity and select Edit/Properties. From the entity edit screen (Figure 2.8) you will be able to set specific names for each property and relationship. Because each column in the table becomes an entity property with the same name, as long as you named your database fields well, you should not need to change anything here. This list will also tell you the database type of each column and the corresponding .NET type that the column will become in your class. LLBLGen Pro also detects whether or not the column is read-only, the length of the field, and whether or not it can be null. Through this cross-checking process, the data layer can catch some errors even before the data is saved to the database.

Select the Fields on relations tab (Figure 2.9). This screen shows a list of all the detected relationships for this entity and the name of the property that you use to access the related entity or entity collection. For each relationship, you will see the name of the relationship (field name), the two tables that the relationship exists between, and the type of relationship (1:1, 1:n, m:1, m:n). For this entity there are 14 different relationships! The name given to this field name/property is based on the name of the table on the other side of the relationship. For the first relationship in the list, the name is StateProvince. Notice that the fourth relationship in this list, ContactCollectionViaSalesOrderHeader, is a many-to-many relationship that we probably would not have created on our own. It traverses the SalesOrderHeader table and allows us to retrieve a collection of Contacts directly from an Address. These extra relationships are helpful additions that do not cost you any extra time, but that might come in handy later when you are trying to solve a specific problem.

Select the Relations tab. The Relations tab allows us to look specifically at the relationship objects themselves (Figure 2.10).
Having our relationships already defined in our database certainly makes our job easier since we do not have to define them manually.
PURCHASE THIS BOOK (Download or Printed) http://www.lulu.com/content/174470
This tab will be important if you want to use a particular relationship that does not exist in your database's schema; you can manually add new relationships here by clicking Add new custom 1:1/1:n/m:1 relation or Add new custom m:n relation.
Select the Fields on related fields tab. This tab allows you to expose a field from a related entity as if it were part of this entity. For example, in the Address table, we have a foreign key that points to a row in the StateProvince table. Normally, if we want to know the name of the state or province, we have to look it up. However, if we map the name of the state or province as a field in our Address entity, we will have easy access to the value without traversing a relationship to another entity. (Note: When using fields mapped to related fields, it is important to use Prefetching, which is discussed in Chapter 10.) Add the following fields on related fields to the Address entity from the StateProvince table by clicking on Add new, and selecting the correct field from the Mapped on field list: StateProvinceCode, Name, and CountryRegionCode (Figure 2.11).

Also, make the following changes to the Employee entity: From the main project screen, right-click on the Employee entity, and select Edit/Properties. Select the Fields on relations tab. Change the Employee field name to Manager. Change the Employee_ field name to Manages.
Adding Views
Now, we will choose the views in our database to add to our project. For now, we will be working primarily with one view. As we mentioned earlier, with LLBLGen Pro, you can consume a view in two ways. First, you can add a view as a typed view. From the main project screen, right-click on Typed Views and select Add New Typed Views from Catalog. We are presented with a list of all the views in our AdventureWorks database (Figure 2.12). Select vIndividualCustomer and click Add to project. Right-click on the vIndividualCustomer that was just added and select Edit/Properties (Figure 2.13). The only items you will customize with a typed view are the name of the typed view and the field names. If these items were poorly named in the database, correct them here. Change the name of the view to CustomerView.

The other way of using a view is to add it as an entity. By adding it as an entity, you will end up with a collection instead of a typed DataTable. You also have the capability of adding relationships to other entities, which will make filtering rows easier. Let's add the same view as an entity as well. Right-click on Entities and select Add New Entities Mapped on Views From Catalog(s) (Figure 2.14). Select the vIndividualCustomer view and select Add to project. Right-click on the new vIndividualCustomer entity and select Edit/Properties. Change the name of the entity to CustomerViewRelated.
On the Entity Fields tab, select the CustomerID row, and check the boxes for Is Readonly and Is part of the primary key (Figure 2.15). On the Relations tab, click Add new custom 1:1/1:n/m:1 relation (Figure 2.16). Select Primary key side, and select SalesOrderHeader as the related table. Make sure both the Primary Key field name and the Foreign Key field name are displaying CustomerID. Adding the relation will make it possible to retrieve a row in this view directly from an order in the SalesOrderHeader table.
Figure 2.14. Adding a view as an entity
Figure 2.15. Entity field configuration screen for view mapped as an entity
You are presented with a list of all the entities in your project (Figure 2.17). From the list of entities, you can choose one or more entities to add to your typed list. You will then edit the entire list of columns from all the tables you have selected in order to include only the columns that you want to appear in your typed list. Select the Customer entity, and click the Add>> button.
Table.Field Name          Field Alias
Customer.CustomerID       (same)
Contact.Title             (same)
Contact.FirstName         (same)
Contact.MiddleName        (same)
Contact.LastName          (same)
Contact.Suffix            (same)
Contact.Phone             (same)
Contact.EmailAddress      (same)
Contact.EmailPromotion    (same)
AddressType.Name          AddressType
Address.AddressLine1      (same)
Address.AddressLine2      (same)
Address.City              (same)
StateProvince.Name        StateProvinceName
Address.PostalCode        (same)
CountryRegion.Name        CountryRegionName
Table 2.18. List of columns in CustomerList typed list
You can only add entities to the list that are related. Now that you have chosen one entity, the unrelated entities will be grayed out. These tables will define the source from which we will select the columns we want. In the order that they are listed, add the following tables: Individual, Contact, CustomerAddress, AddressType, Address, StateProvince, CountryRegion. Now we can select the specific columns that we are interested in. We will be choosing columns in such a way that the result will be similar to the columns in the vIndividualCustomer view. Click on the Fields mapped on entity fields tab, and select the columns according to Table 2.18, making name changes where indicated. As you check the box next to each of the available field names, the field is added to the collection at the bottom of the form (Figure 2.19). The name changes are necessary because there are multiple fields with the column name Name, and each field name in the typed list must be unique.
That is all there is to creating a typed list. Our effort will produce a strongly-typed DataTable with these 16 columns. We can create any criteria that we like when we use this list, but the rows that are returned will always contain these 16 columns. Because we know the exact schema of the result set, a typed list gives us an advantage over a stored procedure. No surprises at run-time!
We have finished adding database objects to our LLBLGen Pro project and now we are ready to generate our code. Before doing so, let's look at the different code generation options available.
In the above example, as soon as the entity is instantiated in Line 1, all the data is fetched from the database and loaded into the object. In Line 2, we can immediately begin using or updating the entity's properties. To save the changes back to the database, we simply call the Save() method in Line 3. The Adapter template group uses an object called a DataAccessAdapter to interact with the database. Instead of database queries happening behind the scenes automatically, every database transaction is explicitly invoked through the DataAccessAdapter object. In this way, the DataAccessAdapter is similar to the built-in TableAdapter object in .NET. With this adapter, you have more control over the database connection and can choose when to open it, how long it remains open, and when to close it.
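The Self-Servicing example referred to at the start of this passage is not reproduced in this excerpt; based on the line-by-line description above, it has roughly this shape (the entity type and ID are illustrative assumptions, not the book's original listing):

```csharp
1  ContactEntity MyContact = new ContactEntity(34);  // Line 1: data is fetched on instantiation
2  MyContact.FirstName = tbFirstName.Text;           // Line 2: use or update properties directly
3  MyContact.Save();                                 // Line 3: persist the changes
```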
The Adapter template group is more like a classroom, where the teacher is firmly in control and tells the students exactly what to do. Students are not empowered to simply take care of themselves; they follow the teacher's specific instructions. The Adapter template group has the advantage of fine-grained control over every database query. In addition, it supports multiple database types and multiple databases. If your data needs are complex, the Adapter template group might be your only option. The Adapter template organizes code around an object that provides data services and is independent of the entity objects themselves. In Example 2.22, we use Adapter code to fetch and update a contact.

1  DataAccessAdapter Adapter = new DataAccessAdapter();
2  ContactEntity MyContact = new ContactEntity(34);
3  Adapter.FetchEntity(MyContact);
4  MyContact.FirstName = tbFirstName.Text;
5  Adapter.Save(MyContact);
Example 2.22. Adapter code example
With Adapter code, we can instantiate the entity in Line 2, but until we call the FetchEntity() method in Line 3, the data will not be loaded into the object from the database. We must use the DataAccessAdapter to read and update the data (Line 5). If none of the templates meet your needs, you also have the ability to create your own templates or modify the existing templates to include your custom code [9]. To summarize, here is an overview of each pre-built template:

Self-Servicing: Bundles persistence inside entity objects. Allows data to load itself automatically as it is needed, without explicit commands. Does not require extra objects to perform data access. Only works for a single database type and a single catalog. Can be used easily and intuitively.

Adapter: Exposes persistence as a service. Allows finer database control. Each database query is explicit. Can target multiple databases and catalogs. Requires an extra object and a few more lines of code.
If you use the Self-Servicing template, another decision you will need to make is whether to use a one-class scenario or a two-class scenario. In the general (one-class) scenario, only one set of entity classes is created. If you need to extend the generated framework, you can place your custom code inside the generated classes between special markers to ensure that it will not be overwritten if the code is regenerated. Outside of using special markers, your only other option would be to create your own set of custom classes that use the generated framework. You might create an AddressManager class that consumes all the entities associated with placing an order (Figure 2.23). A better option in certain situations is to use a two-class scenario. In this case, two sets of entity classes are created. The first type is known as the base type and is exactly the same as the entity class in the one-class scenario, but with a different name. The extra entity class is an empty class that inherits from the base class. You can use this shell as an instant business layer for all of your custom code. This class is kept completely separate from the generated code and will never be overwritten when the code is regenerated, so you will not lose your changes. Figure 2.24 is an
9) While template modification is outside the scope of this book, LLBLGen Pro customers can freely download the LLBLGen Pro Software Development Kit (SDK) from Solutions Design to get tools and instructions for creating templates and adding custom code to existing templates.
example of a two-class scenario. In this setup, you can place your business logic inside the AddressEntity object instead of writing your own manager classes. Entity classes inherit all the functionality of the base classes, but are never overwritten when the code is regenerated. The two-class scenario is highly recommended to fulfill the N-tier best practice in the least amount of time.
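A sketch of what the two-class layout might look like in code; the class and member names are illustrative assumptions, and the real generated base class contains the full persistence logic:

```csharp
// Generated base class: regenerated (overwritten) whenever the schema changes.
public class AddressEntityBase
{
    private string _postalCode;
    public string PostalCode
    {
        get { return _postalCode; }
        set { _postalCode = value; }
    }
    // ...generated fields, relations, and persistence code...
}

// Derived entity class: generated once as an empty shell, never overwritten.
// Custom business logic placed here survives every regeneration.
public class AddressEntity : AddressEntityBase
{
    public bool HasPostalCode()
    {
        return !string.IsNullOrEmpty(this.PostalCode);
    }
}
```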
Figure 2.24. Two Class scenario diagram
In the Destination root folder option, choose a location to save the generated code. (Recommended: My Documents/Visual Studio/Projects/AW.Data) Be sure that you use an empty folder, as LLBLGen Pro will create many new folders and files. Using the same folder for two different LLBLGen Pro projects is not possible without overwriting some of the other project's data. The default location is My Documents/LLBLGen Pro Projects. A better option may be to save your project in the default Visual Studio projects folder (My Documents/Visual Studio/Projects), within another folder named for your particular project. Click Start generator to begin the code generation process. You will see a screen like Figure 2.26 when the process is finished, summarizing the generation tasks that were completed. Each file that was created by the program is listed here. In return for a small amount of configuration, we get a lot of useful code! You can now save your LLBLGen Pro project and close the application. When you navigate to the folder where you created your project, you should see something like Figure 2.27. We will take a look at how to use the specific output files in the next chapter. For now, you have successfully generated your first LLBLGen Pro project. Now, take a break, get a fresh cup of coffee, and congratulate yourself! You have just saved yourself weeks, if not months, of development time!
Further Suggestions
Check your newly generated files into source control, or find an easy way to back up the files. You can always right-click on the folder and select Send To > Compressed (Zipped) Folder to save the code in a zipped file. Add the date to the zipped file's name so you can remember when the copy was made. Inside LLBLGen Pro, go to File > Preferences (Figure 2.28) and review the preferences you can configure for the LLBLGen Pro project. Note the Preferred generator configuration option, which allows you to choose a default template for the code that you generate. Set it to SelfServicing, two class scenario (Base classes only). Also, I recommend changing the Preferred project folder and the Preferred destination root folder to My Documents\Visual Studio 2005\Projects.
Even if your database is well organized with clear table and column names, you will still need to spend time naming the relationships between your entities according to their function (as we did with the Employee entity's relationships). Because LLBLGen Pro detects a large number of relationships, some of the default names will not be intuitive.
In the next chapter, we will begin setting up our Visual Studio solution and project files.
Solution Setup
Plans are only good intentions unless they immediately degenerate into hard work.
Peter Drucker (1909-2005)
Chapter Goals

- Create your Visual Studio solution and project files.
- Add the necessary references to compile your code.
- Configure the App.config file.
- Build your solution for the first time.
- Acquaint yourself better with the files and folders of your generated code.

Before we begin, we will need to set up all the major project files for our application. The organization and structure of a normal Visual Studio project can be confusing, so we will walk through each part step-by-step.
Open Visual Studio .NET 2005 and go to File > New > Project. (Figure 3.1) Choose Visual C# > Windows on the left, and Windows Application on the right.
Chapter 3
In the Name field, enter AW.Win. Keep the default location, inside My Documents/Visual Studio 2005/Projects. Make sure Create directory for solution is checked. Enter AW as the solution name.

For those new to Visual Studio, a solution is your master file. Each solution can contain multiple projects. A project can be a C# Class Library, C# Windows Application, or C# Web Application. In our case, we will have two main projects: AW.Data, the generated data/business layer, which is a C# Class Library project; and AW.Win, a C# Windows Application project. You should now have the solution and your first project loaded. Inside Visual Studio's Solution Explorer window, you should see a list similar to Figure 3.2.
Now we need to add the AW.Data project containing our LLBLGen Pro code to our solution. Go to File > Add > Existing Project. Navigate to the folder containing your generated code, and click Open.
Looking at Solution Explorer again, you should see a solution tree similar to Figure 3.3.
Adding references
Now we need to add some references to the AW.Win project to enable use of our LLBLGen Pro generated code. First, we need to add a reference to the AW.Data project. This will enable us to use the output of the AW.Data class library. In Solution Explorer on the AW.Win project, right-click References and select Add > Reference. (Figure 3.4) Select the Projects tab, and highlight AW.Data. Click OK.
We will also need to add references to some common LLBLGen Pro libraries. These libraries, which were installed along with LLBLGen Pro, contain classes you will use as you write your dynamic queries. If another developer will be using this solution, he or she either needs to install LLBLGen Pro on his or her individual machine, or have these DLL files available. Right-click References again and select Add > Reference. On the .NET tab, add the LLBLGen Pro .NET 2.0 ORM Support Classes library (SD.LLBLGen.Pro.ORMSupportClasses.NET20.dll) and the LLBLGen Pro .NET 2.0 Dynamic Query Engine for SQL Server 7/2000/2005/MSDE (SD.LLBLGen.Pro.DQE.SqlServer.NET20.dll). Your AW.Win project's references should now look like Figure 3.5.
Figure 3.5. AW.Win project references
App.config settings
When LLBLGen Pro needs to read information from the database, it uses a connection string to figure out where to find the database and which credentials to use. It is a best practice to put this string in exactly one place in your application. That way, you can change it easily should the need arise (and it will). The connection string is normally placed in a special XML configuration file called the App.config file. If you look in your AW.Data project, you will see one that LLBLGen Pro created for you.

<?xml version="1.0"?>
<configuration>
  <appSettings>
    <add key="Main.ConnectionString"
         value="data source=jchancellor2-nb;initial catalog=AdventureWorks;integrated security=SSPI;persist security info=False"/>
  </appSettings>
</configuration>
Example 3.6. App.config file contents
Double-click on the App.config file in the AW.Data project to open it for editing (Example 3.6).
Notice the Main.ConnectionString key. You can change the name of the key that the class library references in the property settings of your LLBLGen Pro project when you generate your code. The value part of the key contains the connection string.

Whenever possible, using integrated security is preferable to using a SQL username and password. One reason for this is that you do not want to put any usernames or passwords in plain text, where someone could easily steal your credentials and gain access to your database. With integrated security, the connection string reveals little useful information to a malicious user. By default, a .NET Windows application executes under the context of the Windows user who is logged in to the computer. When connecting to the database using integrated security, Windows passes this context along to the database for authentication. This means that for every user who runs our Windows program, either the individual's Windows account or a security group to which the user belongs will require permission to use the tables in the SQL database that our program accesses. Your unique situation will determine what kind of security setup you use. Go ahead and change your connection string if you need to use different credentials than the ones that you used to scan your schema.

After making any changes, we can copy the App.config file from the AW.Data project to the AW.Win project. Every project that references the AW.Data project will need an App.config file available with the connection string inside. In Solution Explorer, right-click on the App.config file in the AW.Data project and select Copy. Right-click on the AW.Win project and select Paste. Solution Explorer should now look like Figure 3.7.

Now that we have made all the necessary changes to our solution, let's compile it for the first time. Right-click on your solution in Solution Explorer and select Build.
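To see exactly what a connection string like this contains, you can pull it apart key by key. The following standalone sketch (the parser and the sample string are our own illustration, not part of the generated code) splits the value into the same pieces the data layer's provider ultimately interprets:

```csharp
using System;
using System.Collections.Generic;

class ConnectionStringDemo
{
    // Splits a "key=value;key=value" connection string into its parts so you
    // can inspect exactly what will be handed to SQL Server.
    public static Dictionary<string, string> Parse(string connectionString)
    {
        Dictionary<string, string> parts =
            new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        foreach (string pair in connectionString.Split(';'))
        {
            if (pair.Trim().Length == 0)
                continue;
            // Split on the first '=' only, so values containing '=' survive.
            string[] kv = pair.Split(new char[] { '=' }, 2);
            parts[kv[0].Trim()] = kv[1].Trim();
        }
        return parts;
    }

    static void Main()
    {
        // The same shape as the Main.ConnectionString value in App.config.
        string cs = "data source=jchancellor2-nb;initial catalog=AdventureWorks;"
                  + "integrated security=SSPI;persist security info=False";
        Dictionary<string, string> parts = Parse(cs);
        Console.WriteLine(parts["data source"]);          // jchancellor2-nb
        Console.WriteLine(parts["integrated security"]);  // SSPI
    }
}
```

Note that with integrated security, the only pieces present are the server and database names; no credentials appear anywhere in the file.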
You should receive a Build Successful message in the status bar and the Output window (Figure 3.8).
FOLDERS
CollectionClasses: This folder contains the classes of all your strongly typed collections. You will not modify anything inside this folder.

DaoClasses: This folder contains objects that manage database-related operations behind the scenes. For instance, these classes take entity objects and use the dynamic query engine to build the SQL statements, as well as perform the actual query.

EntityBaseClasses: These are the data-layer entity objects. All of these entities have a suffix of -base in their names. You will never call these objects directly or change anything in this code.

EntityClasses: This folder contains your business object shells. You will spend most of your time here extending these classes. If you want to save your work, be sure to make a copy of this folder!

FactoryClasses: These classes contain factories which will help you create criteria for queries, and objects that will define how you want items sorted. We will be using these classes, but not modifying them.

HelperClasses: These classes are called by the data layer. They create connections to the database, provide transaction support, and define the default values of types, among other actions. If you want to modify how your connection string is read when a database connection is opened, you can modify DBUtils.cs. You can also modify TypeDefaultValue.cs if you would like to specify different default values for .NET types. For instance, if you would rather have a null integer return the minimum possible integer value instead of 0, you can make that change here. If you do make changes here, remember to back them up, and to set the read-only flag to true, as these files are normally overwritten by the generator.

RelationClasses: These classes contain definitions for all of the relations that exist between each entity. You will never modify these classes.

StoredProcedureCallerClasses: These classes expose the stored procedures that we selected. They are defined in only two files: ActionProcedures.cs and RetrievalProcedures.cs. If you want more procedures accessible in your code, do not add them here manually! It is better to add them to your LLBLGen Pro project and regenerate your code.

TypedListClasses: This folder contains all the typed lists we added to our project. Every typed list has its own class in its own file. Again, if you would like a new one, make it in your LLBLGen Pro project.

TypedViewClasses: Like the typed lists, typed views are inside this folder, one per file. Add new views in your LLBLGen Pro project.

ValidatorClasses: Validator classes are shells that LLBLGen Pro generates to help you implement field validation. These classes allow you to accept or reject changes when the properties of your entities are changed. Each entity has its own validator class in its own file. If you choose to take advantage of field validation, these files will be heavily modified.
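The TypeDefaultValue idea described above, mapping a database NULL to a .NET default of your choosing, can be sketched in isolation. The helper below is our own stand-in, not the generated TypeDefaultValue.cs; it only illustrates the kind of decision that file centralizes:

```csharp
using System;

class DefaultValueDemo
{
    // Hypothetical stand-in for what TypeDefaultValue.cs lets you customize:
    // the .NET value handed back when a database field is NULL. Here we choose
    // int.MinValue instead of the usual default of 0.
    public static int DefaultForNullInt32(object dbValue)
    {
        if (dbValue == null || dbValue == DBNull.Value)
            return int.MinValue;
        return (int)dbValue;
    }

    static void Main()
    {
        Console.WriteLine(DefaultForNullInt32(DBNull.Value)); // -2147483648
        Console.WriteLine(DefaultForNullInt32(42));           // 42
    }
}
```

Choosing a sentinel like int.MinValue makes "was NULL" distinguishable from a legitimate stored 0, at the cost of having to treat that sentinel specially everywhere.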
FILES
There are two more files in the root of your AW.Data project that you should be aware of:

AssemblyInfo.cs: If you want to add more information about your assembly, such as your company's name or version information, you can add it to this file.

ConstantsEnums.cs: This file contains an index of every field in every table/entity, typed list, and typed view in your project. Instead of typing in a static string as you work with your objects, you should always use these enumerators. This index is regenerated when your code is regenerated, so if you consistently reference it, you will notice when your schema changes.

In the next chapter, we will begin building the first form of our Windows application.
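The advantage of such a generated index can be shown with a tiny self-contained demo. The enum below is a hypothetical stand-in (the real names in ConstantsEnums.cs depend on your project's entities), but the principle is identical: the column name lives in one regenerated place instead of being scattered through your code as strings.

```csharp
using System;
using System.Data;

class FieldIndexDemo
{
    // Hypothetical stand-in for a generated field index in ConstantsEnums.cs.
    public enum EmployeeFieldIndex { EmployeeId = 0, LastName = 1 }

    static void Main()
    {
        DataTable employees = new DataTable();
        employees.Columns.Add("EmployeeID", typeof(int));
        employees.Columns.Add("LastName", typeof(string));
        employees.Rows.Add(158, "Smith");

        DataRow row = employees.Rows[0];
        // String-based access: a typo like row["EmployeId"] still compiles
        // and only fails at run time.
        Console.WriteLine(row["LastName"]);
        // Index-based access: a schema change regenerates the enum, so stale
        // references surface as compile-time errors wherever they are used.
        Console.WriteLine(row[(int)FieldIndexDemo.EmployeeFieldIndex.LastName]);
    }
}
```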
MDI Parent
Our application will eventually have many forms to search and input data, so we will use the multiple-document interface features of Visual Studio. The first form that we create will hold all of the other forms we design in later chapters. This main form is known as the MDI parent. Because we want to give good, descriptive names to all of our objects in the project, we need to delete the form that was created for us and add our own form that will become the new MDI parent.

In the AW.Win project, delete Form1.cs. Right-click on the AW.Win project and select Add > New Item. Choose Windows Form and give it the name frmMain.cs. Open the Program.cs file. Change the Main method to call our new frmMain form when the application starts (Example 4.1).

1 static void Main()
2 {
3     Application.EnableVisualStyles();
4     Application.SetCompatibleTextRenderingDefault(false);
5     Application.Run(new frmMain());
6 }
Example 4.1. Updated Main() method
The Main() method in the Program.cs file specifies which form will start when the application launches. Line 5 is the only line that needs to change. We also want to change some of the properties of the frmMain object to make the form a little more attractive.
Chapter 4
With frmMain selected, look at the Properties window and change the following properties of the form to the values listed in Table 4.2.
Property         Value
IsMdiContainer   True
StartPosition    CenterScreen
Text             Adventure Works
WindowState      Maximized
Setting IsMdiContainer to True will allow us to add other forms to this form as MDI children. As you set this property, you will notice the background and border change slightly to indicate that the form is now a container for other forms (Figure 4.3).
Now we will build the menu structure for our application. Drag a MenuStrip object onto the form and add menus and sub-menus to give you the structure in Figure 4.4. The & symbol will underline the next letter in the menu name and allow users to press Alt + that letter to activate the item.
Select the MenuItem you created that is labeled Window. Verify that the name of this object is windowToolStripMenuItem. Select the MenuStrip object (the entire menu object that contains all the other menus), and set the MdiWindowListItem property to windowToolStripMenuItem. By setting the MdiWindowListItem property, any MDI child forms of frmMain will automatically be added to the Window menu and removed when the child form is closed. In previous versions of Visual Studio, you would have had to code these actions by hand.

Now we will create an event handler that will close the application when the user clicks the Exit menu. Double-click the Exit menu object. Visual Studio should create the event handler for the menu's Click event and switch to code view. Add the code in Line 3 to the exitToolStripMenuItem event handler (Example 4.6).

1 private void exitToolStripMenuItem_Click(object sender, EventArgs e)
2 {
3     this.Close();
4 }
Example 4.6. Menu exitToolStripMenuItem's Click event handler
Now we have a fully functioning MDI parent form that we can use as a launching pad for our other forms. Let's compile and test the application to see how it functions. Press F5 to compile and launch the application. At this point, the only actions you can perform in the application are to browse the menus and close the program. It is not much, but it is a great start! Go ahead and exit the application, and we will add a few more forms before we finish this chapter.
Property         Value
FormBorderStyle  FixedToolWindow
ShowInTaskbar    False
StartPosition    CenterParent
Text             (frmOrderSearch) Order Search
                 (frmOrderEdit) Order Edit
                 (frmCustomers) Customer List
                 (frmOrganization) Adventure Works Organization
                 (frmVacationBonus) Vacation Bonus Utility
Table 4.7. Properties to change on the five new child forms
Add five new Windows forms named frmOrderSearch, frmOrderEdit, frmCustomers, frmOrganization, and frmVacationBonus. Set the properties listed in Table 4.7 for all of the new windows you just created. Now we will create event handlers for the appropriate menus and launch the corresponding forms. Switch to the design view of frmMain and double-click on each of the menus to automatically create the Click event handler for each one. Add the code from Example 4.8 below the code you added in Example 4.6. Remember that double-clicking on the menu will both create the menu's Click event handler and associate that event handler with the menu item in the menu strip. If you add the code manually in code view, make sure you also set the menu's Click event to point to the correct event handler. Otherwise, the event handler will not run when you click the menu.

1  public void LaunchChildForm(Form ChildForm)
2  {
3      ChildForm.MdiParent = this;
4      ChildForm.WindowState = FormWindowState.Normal;
5      ChildForm.Show();
6  }
7  private void ordersToolStripMenuItem_Click
8      (object sender, EventArgs e)
9  {
10     LaunchChildForm(new frmOrderSearch());
11 }
12 private void customersToolStripMenuItem_Click
13     (object sender, EventArgs e)
14 {
15     LaunchChildForm(new frmCustomers());
16 }
17 private void organizationToolStripMenuItem_Click
18     (object sender, EventArgs e)
19 {
20     LaunchChildForm(new frmOrganization());
21 }
22 private void vacationBonusUtilityToolStripMenuItem_Click(
23     object sender, EventArgs e)
24 {
25     LaunchChildForm(new frmVacationBonus());
26 }
Example 4.8. MenuItem Click event handlers and the LaunchChildForm() subroutine
In the code above we simply created event handlers for our four menu options. Each handler instantiates the correct form and calls a subroutine, passing it the newly created form. In Line 3, the subroutine sets the MdiParent property to the current form (in this case frmMain), makes sure the WindowState property of the form is Normal (not Maximized or Minimized), and then displays it to the user. When you compile and run the application now, you should be able to bring up each one of the blank forms and get a list of all the open windows under the Window menu (Figure 4.9). But what happens if you select the same menu item more than once? We will make one final improvement to this parent form before we move on. The way our code is written in Example 4.8, if a user selects a particular
menu option multiple times, we will create multiple copies of the form. Let's rewrite the subroutine so that it checks to see if any forms of that type are already open, and if so, brings the existing form to the front. Modify the LaunchChildForm() subroutine we created in Example 4.8 to match Example 4.10.

1  public void LaunchChildForm(Form ChildForm)
2  {
3      bool FormAlreadyExists = false;
4      foreach (Form myForm in this.MdiChildren)
5      {
6          if (myForm.GetType() == ChildForm.GetType())
7          {
8              FormAlreadyExists = true;
9              ChildForm = myForm;
10             break;
11         }
12     }
13     if (FormAlreadyExists == true)
14     {
15         ChildForm.BringToFront();
16     }
17     else
18     {
19         ChildForm.MdiParent = this;
20         ChildForm.WindowState = FormWindowState.Normal;
21         ChildForm.Show();
22     }
23 }
Example 4.10. Updated LaunchChildForm() subroutine
In Lines 4-12 we loop through all the forms in the MdiChildren array, searching for any that are the same type as the form that was passed to us. If we find one, we set a flag to true, assign the existing form to our form variable, and exit the loop. In Line 13, if the flag has been set, we bring the child form to the front. If the flag is false, we assign the form to be a child of this form, set the WindowState property, and display it to the user.

In this chapter we set up the basic shell of our application. In the next chapter, we will begin configuring some simple forms to display the results of stored procedures, typed views, and typed lists.
Simple Forms
The secret to creativity is knowing how to hide your sources.
Albert Einstein (1879 - 1955)
Chapter Goals

- Execute two stored procedures and display the results.
- Extend the business logic layer with a simple query.
- Learn to use SortExpression and SortClause objects.
- Recreate the logic of the stored procedure using entity objects.
- Consume a view as a typed view and as an entity.
- Use Relation objects to make joins between tables.
- Format DataGridView columns.
- Learn to use a typed list.

We are now ready to use our LLBLGen Pro classes and create a few functional forms. We will begin with a stored procedure, since it is one of the easiest database objects to use and understand.
Chapter 5
Properties to set:
- Text: Search
- Dock: Top (Send to Back)
- Dock: Fill (Bring to Front)
The stored procedures that we will be using are uspGetEmployeeManagers and uspGetManagerEmployees that we added in Chapter 2. Figures 5.3 and 5.4 show the sample output for those stored procedures based on entering a parameter of 50.
We would like to take these results and display them in the TreeView control. Notice that in both cases, we have to take a flat table and convert it into a meaningful hierarchy. This will involve some recursion, which can make the code difficult to read. Try not to get bogged down in the recursion, but instead focus on how a stored procedure is accessed and how the data is used. Switch to code view, and add the code in Example 5.5 to the top of the class to import namespaces.

using AW.Data;
using AW.Data.EntityClasses;
using AW.Data.CollectionClasses;
using AW.Data.HelperClasses;
using SD.LLBLGen.Pro.ORMSupportClasses;
We will use these namespaces throughout the code in this book, so do not forget to import them on other forms. The AW.Data namespace contains all the generated classes. AW.Data.EntityClasses contains all the tables we have mapped as entities. AW.Data.CollectionClasses contains all the collection classes that hold specific types of entities. The AW.Data.HelperClasses namespace allows us to write natural language queries. And finally, the ORMSupportClasses namespace contains common LLBLGen Pro objects. Now, add the subroutine in Example 5.6 to frmOrganization in code view.
1  private void CreateNode(DataRow row, TreeNodeCollection Nodes)
2  {
3      TreeNode Node = new TreeNode();
4      Node.Text = row["LastName"] + ", " + row["FirstName"]
5          + " [" + row["EmployeeID"] + "]";
6      Node.Tag = row["EmployeeID"];
7      Nodes.Add(Node);
8      DataRow Children = GetChildRow(row, row.Table);
9      if (Children != null)
10         CreateNode(Children, Nodes[0].Nodes);
11 }
12 private DataRow GetChildRow(DataRow row, DataTable table)
13 {
14     if (!(table.Rows.IndexOf(row) - 1 < 0))
15     {
16         return table.Rows[table.Rows.IndexOf(row) - 1];
17     }
18     else
19         return null;
20 }
Example 5.6. Private methods CreateNode() and GetChildRow()
The CreateNode() method takes a DataRow and a TreeNodeCollection. It creates a node based on the information in the row and adds it to the Nodes collection. In Line 8, we call the GetChildRow() method to get the next item in the list, and in Lines 9-10, call CreateNode() recursively on the child collection of the newly created node. The GetChildRow() method finds the index of the provided row and returns the next one in the table. We are assuming the rows will be in order from lowest in the hierarchy to highest, and we will work in reverse order from the highest. Also, add the private method in Example 5.7.

private TreeNode FindEmployeeRecursive(
    int EmployeeID, TreeNodeCollection Nodes)
{
    TreeNode FoundNode = null;
    foreach (TreeNode Node in Nodes)
    {
        if (FoundNode != null)
            return FoundNode;
        if (Convert.ToInt32(Node.Tag) == EmployeeID)
        {
            FoundNode = Node;
            break;
        }
        else
        {
            if (Node.Nodes.Count > 0)
            {
                FoundNode = FindEmployeeRecursive(
                    EmployeeID, Node.Nodes);
            }
        }
    }
    return FoundNode;
}
Example 5.7. Private method FindEmployeeRecursive()
The FindEmployeeRecursive() method searches through the TreeNodeCollection provided and finds the node that corresponds to the EmployeeID provided. Finally, create an event handler for the btnSearch button by double-clicking it in design view. Add the code in Example 5.8 to the button's Click event handler.

1  private void btnSearch_Click(object sender, EventArgs e)
2  {
3      tvOrganization.Nodes.Clear();
4      TreeNode MasterNode = new TreeNode();
5      int EmployeeID = Int32.Parse(this.cbEmployee.Text);
6      DataTable Managers = AW.Data.StoredProcedureCallerClasses.
7          RetrievalProcedures.UspGetEmployeeManagers(EmployeeID);
8      DataTable Managees = AW.Data.StoredProcedureCallerClasses.
9          RetrievalProcedures.UspGetManagerEmployees(EmployeeID);
10     if (Managers.Rows.Count > 0)
11     {
12         TreeNodeCollection ManagersCol = new TreeNode().Nodes;
13         CreateNode(Managers.Rows[Managers.Rows.Count - 1],
14             ManagersCol);
15         TreeNode CEONode = new TreeNode();
16         DataRow CEORow = Managers.Rows[Managers.Rows.Count - 1];
17         CEONode.Text = CEORow["ManagerLastName"] + ", "
18             + CEORow["ManagerFirstName"]
19             + " [" + CEORow["ManagerID"] + "]";
20         CEONode.Tag = CEORow["ManagerID"].ToString();
21         CEONode.Nodes.Add(ManagersCol[0]);
22         MasterNode = CEONode;
23     }
24     foreach (DataRow row in Managees.Rows)
25     {
26         TreeNode Manager = FindEmployeeRecursive(
27             Convert.ToInt32(row["ManagerID"]),
28             MasterNode.Nodes);
29         TreeNode Employee = new TreeNode();
30         Employee.Text = row["LastName"] + ", "
31             + row["FirstName"]
32             + " [" + row["EmployeeID"] + "]";
33         Employee.Tag = row["EmployeeID"];
34         Manager.Nodes.Add(Employee);
35     }
36     tvOrganization.Nodes.Add(MasterNode);
37     tvOrganization.Nodes[0].ExpandAll();
38 }
Example 5.8. Button btnSearch Click event handler
When the Search button is clicked, we clear out all the nodes in the tree view. In Line 4, we create a TreeNode to hold the nodes we will be creating. In Line 5, we create an integer from the information that was entered in the ComboBox. In Lines 6-9, we call both stored procedures and catch the results in DataTable objects. Note that each stored procedure call is one line of code! LLBLGen Pro takes the work out of wrapping the stored procedure in code ourselves, and you have the benefit of IntelliSense as you type the name of the procedure. In this way, you do not have to worry about entering the name as a hard-coded string yourself and possibly typing it wrong.

With the results in hand, if we have at least one manager returned, we start from the last row (the highest manager) and call the CreateNode() method, which recursively adds the subordinate managers. Because the stored procedure does not actually return the highest manager (the CEO) as a separate row, we have to add it ourselves in Lines 15-21 if we would like it displayed. In Line 24, we loop through all the rows in the results of the other stored procedure, which returns everyone below our EmployeeID. For each row, we find the manager node for that employee by calling the FindEmployeeRecursive() method in Lines 26-28, and add a new employee node to the Nodes property of the manager node. To finish, we add the main node to the TreeView control in Line 36, and expand all the nodes in Line 37.

Compile and run the application (press F5). Select Reports > Organization from the menu. You should be able to enter 158 in the ComboBox, click Search, and get results like those in Figure 5.9.
The hardest part of that code was not accessing the stored procedure, but formatting its results for the TreeView! Now with each EmployeeID that is entered, we get a path to the top of the organization and a path to the bottom. Note that this code will not work on EmployeeID 109, the CEO, due to the way the stored procedure returns managers. But enter anyone else and you will find his or her position within the company.

Let's review the strengths and weaknesses of this particular approach. First, it was incredibly easy to use the stored procedure logic and pass it a parameter. LLBLGen Pro automatically detects the parameters for you; that is more logic you do not have to code yourself. Unfortunately, the results of the stored procedure are a generic DataTable. Our underlying problem is that we do not have any clues in our data layer that reveal the schema of the results to us. That is why we needed to give an example of the results to you, so you can see the names of the columns and know the structure of the data we are processing. IntelliSense cannot help in this situation. This forces us to put around 13 hard-coded strings of column names in our code (several examples in Lines 17-20 of Example 5.8) in order to access the correct values from the DataTable. These strings contribute to the brittleness of our code and greatly increase our maintenance costs in the future. We have no way of knowing if the ManagerID column will be there or not until we run our code. In addition, we have no easy way of updating this code should our stored procedure change. If the columns do not exist or their names change, the application will still compile, and the fields will be blank at best or throw exceptions at worst.

The other disadvantage of this approach is that the stored procedure can only return a flat table of data to us, even though the data that the results represent has a hierarchical structure of parent and child rows.
We thus have to do extra work to reconstruct the data into the structure it originally had in the database. Let's attempt a different solution using some of the built-in features of the generated framework.
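To make that "extra work" concrete, here is a minimal, self-contained sketch (the employee and manager IDs are invented for illustration) of rebuilding parent/child structure from flat (EmployeeID, ManagerID) rows, the same job our recursive TreeView code performs:

```csharp
using System;
using System.Collections.Generic;

class HierarchyDemo
{
    // Groups each flat (EmployeeID, ManagerID) row under its manager,
    // recovering the hierarchy the stored procedure flattened away.
    public static Dictionary<int, List<int>> BuildTree(int[,] rows)
    {
        Dictionary<int, List<int>> children = new Dictionary<int, List<int>>();
        for (int i = 0; i < rows.GetLength(0); i++)
        {
            int employeeId = rows[i, 0];
            int managerId = rows[i, 1];
            if (!children.ContainsKey(managerId))
                children[managerId] = new List<int>();
            children[managerId].Add(employeeId);
        }
        return children;
    }

    static void Main()
    {
        // Each row is { EmployeeID, ManagerID }, like the flat result set.
        int[,] rows = { { 2, 1 }, { 3, 1 }, { 4, 2 } };
        Dictionary<int, List<int>> tree = BuildTree(rows);
        Console.WriteLine(tree[1].Count); // 2
        Console.WriteLine(tree[2][0]);    // 4
    }
}
```

The database already knew these relationships; a flat result set forces the client to rediscover them row by row.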
The FindLowestNode() method finds the lowest child node by looking at the first node in each collection. This is helpful when adding managers. The MakeNode() method takes an EmployeeEntity and creates a node with the name of the employee displayed and the EmployeeID saved in the Tag property. Also add the two private methods in Example 5.11.

private TreeNode GetManagersRecursive(EmployeeEntity Employee)
{
    TreeNode EmployeeNode = MakeNode(Employee);
    if (Employee.ManagerId != 0)
    {
        TreeNode ManagerNode = GetManagersRecursive(Employee.Manager);
        FindLowestNode(ManagerNode).Nodes.Add(EmployeeNode);
        return ManagerNode;
    }
    return EmployeeNode;
}
private TreeNode GetEmployeesRecursive(EmployeeEntity Employee)
{
    TreeNode EmployeeNode = MakeNode(Employee);
    foreach (EmployeeEntity Subordinate in Employee.Manages)
    {
        EmployeeNode.Nodes.Add(GetEmployeesRecursive(Subordinate));
    }
    return EmployeeNode;
}
The GetManagersRecursive() method finds all the managers for a given employee entity, adding each one as a node. The GetEmployeesRecursive() method finds and adds all the employees that are managed by a given employee. Both are recursive, and will navigate all the way up to the CEO and all the way down to the bottom. Before we can run this code, we have to add a new Click event handler for btnSearch. Add the code in Example 5.12 to the btnSearch Click event handler.

1  private void btnSearch_Click(object sender, EventArgs e)
2  {
3      tvOrganization.Nodes.Clear();
4      TreeNode MasterNode;
5      EmployeeEntity Employee = new EmployeeEntity(
6          Int32.Parse(cbEmployee.Text));
7      TreeNode EmployeeNode = MakeNode(Employee);
8      if (Employee.Manages.Count > 0)
9      {
10         foreach (EmployeeEntity Subordinate in Employee.Manages)
11         {
12             EmployeeNode.Nodes.Add(GetEmployeesRecursive(Subordinate));
13         }
14     }
15     if (Employee.ManagerId != 0)
16     {
17         TreeNode ManagersNode = GetManagersRecursive(Employee.Manager);
18         FindLowestNode(ManagersNode).Nodes.Add(EmployeeNode);
19         MasterNode = ManagersNode;
20     }
21     else
22     {
23         MasterNode = EmployeeNode;
24     }
25     MasterNode.ExpandAll();
26     tvOrganization.Nodes.Add(MasterNode);
27 }
Example 5.12. Modified button btnSearch Click event handler
We begin by clearing out the nodes in Line 3. In Line 4, we create a node that will serve as the root of all the other nodes. In Line 5, we create an instance of the employee entity with the ID that was entered into the ComboBox. Then we loop through all the subordinate employees, calling the GetEmployeesRecursive() method on each employee entity and adding the resulting node to our node collection in Lines 8-14. In Lines 15-20, we do the same for each manager.
If there are no managers, our employee entity becomes the root node in Lines 21-24. Then we expand all the nodes and add the root node to the TreeView control. Let's give it a try! Compile and run the application, and select Reports > Organization. Try entering a few EmployeeIDs like 12, 50, and 109. Now that we are using the entity objects instead of the stored procedure, we can enter the CEO's ID (109) and get a complete list of everyone in the company (Figure 5.13).
Figure 5.13. Form frmOrganization results using employee entities and relationships
Changing our code to use the entity objects was not too difficult. Like the previous solution, the hardest part was the recursive methods, although they were simpler this time since we use the same basic object (the EmployeeEntity) throughout. Looking back through our code, we have eliminated all hard-coded strings. Instead, everything is checked at compile time, and the code will be much easier to maintain. If we decide to change our schema, breaking changes will be recognized at compile time after we quickly refresh our data layer (discussed in Chapter 12). As another benefit, we no longer need either of the two stored procedures.

For those of you who are familiar with dynamic SQL and lazy loading, you know that the code as it stands right now is grossly inefficient in the way it accesses the database and executes queries. We will revisit this example in Chapter 10, when we talk about performance. For now, we are happy that it is working. This solution is complicated by the fact that the information we need to display in the form is scattered between the employee row and the contact row. Accessing both rows is easy with our generated framework, since all of the relationships are added automatically; simply using Employee.Contact.FirstName will retrieve the correct row and field from the Contacts table. Crossing over to this new row and table, however, causes another query to execute against the database. If we merged the rows together in a view, we could cut down on the number of queries we would have to execute. We will demonstrate using a view shortly.

Before we finish, let's make selecting an employee easier by populating the ComboBox with a list of employees. To accomplish this task we will need to add our first extension to the business logic layer. Open the EmployeeEntity.cs file in the EntityClasses folder of the AW.Data project.
Simple Forms
Import the AW.Data.HelperClasses namespace at the top of the class with the other namespaces. Find the section marked Custom Entity Code and insert the code in Example 5.14.

 1 public static EmployeeCollection GetEmployees()
 2 {
 3     RelationCollection Relations = new RelationCollection();
 4     Relations.Add(
 5         EmployeeEntity.Relations.ContactEntityUsingContactId);
 6     ISortExpression LastFirstAlpha = (ContactFields.LastName | SortOperator.Ascending)
 7         & (ContactFields.FirstName | SortOperator.Ascending);
 8     EmployeeCollection Employees = new EmployeeCollection();
 9     Employees.GetMulti(null, 0, LastFirstAlpha, Relations);
10     return Employees;
11 }
12 public string EmployeeDisplayName
13 {
14     get
15     {
16         return this.Contact.LastName + ", " + this.Contact.FirstName;
17     }
18 }
Example 5.14. Class EmployeeEntity's GetEmployees() method and EmployeeDisplayName property
The GetEmployees() method is marked as public and static, which means it is available outside the class without instantiating an instance of the class. This will make the method easy to use later in our form. GetEmployees() is essentially a search method that returns all employees in the database, sorted first by last name and then by first name.

In Line 3 we create a RelationCollection, a class from the ORMSupportClasses library that holds the relations necessary to perform other logic. In this method, we want to sort by a field in another table. To do so, we need to add the ContactEntityUsingContactId relation from the EmployeeEntity class, which we do in Lines 4-5. Every relation an entity participates in is available inside the entity's Relations collection. We choose the correct one based on which related fields we will be accessing later. These relations become the JOIN statements in SQL when the query is executed.

Next we create a SortExpression object in Line 6. SortExpression objects simply tell the database how to sort the data. We use the ISortExpression interface, give the object a name, and, on the other side of the equals sign, define our sorting expression. We use field objects to specify which field we want to sort by. Note that field objects are different from the field indexes that we have used earlier: field objects are located in the HelperClasses namespace, while field indexes are enumerators available in the root namespace of the generated code. We select the proper field, type a |, and then the correct SortOperator enumerator indicating the direction of the sort. We can specify multiple sorts by chaining expressions with &. The entire expression is saved in the LastFirstAlpha SortExpression object. Next, in Line 8, we create a collection of employees.
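The | and & syntax looks exotic, but it is ordinary C# operator overloading: the field object overloads | to produce a one-clause sort expression, and the sort expression overloads & to concatenate clauses. The sketch below shows the pattern with hypothetical names; it is not LLBLGen Pro's actual implementation, just a minimal illustration of the technique:

```csharp
using System;
using System.Collections.Generic;

// All names here are hypothetical -- a minimal sketch of the
// operator-overloading pattern, not LLBLGen Pro's code.
public enum SortDirection { Ascending, Descending }

public class SortClause
{
    public string FieldName;
    public SortDirection Direction;
}

public class SortExpression
{
    public List<SortClause> Clauses = new List<SortClause>();

    // left & right -> one expression with the clauses concatenated in order.
    public static SortExpression operator &(SortExpression left, SortExpression right)
    {
        var combined = new SortExpression();
        combined.Clauses.AddRange(left.Clauses);
        combined.Clauses.AddRange(right.Clauses);
        return combined;
    }
}

public class Field
{
    public string Name;
    public Field(string name) { Name = name; }

    // field | direction -> a single-clause sort expression.
    public static SortExpression operator |(Field field, SortDirection direction)
    {
        var expression = new SortExpression();
        expression.Clauses.Add(new SortClause { FieldName = field.Name, Direction = direction });
        return expression;
    }
}
```

With these definitions, (lastName | SortDirection.Ascending) & (firstName | SortDirection.Ascending) yields a single expression whose clauses appear in the order written, which is exactly how the LastFirstAlpha expression reads.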
We fill the collection in Line 9 by calling the GetMulti() method, passing null for the predicate (we will discuss predicates in the next chapter), 0 for the maximum number of items to return, our SortExpression object, and our RelationCollection object. If we do not provide a predicate, we will receive all rows in the database.

In Lines 12-18, the EmployeeDisplayName property makes it easier to bind a collection of employee objects to a control by exposing the name of the related contact in a single property. This property keeps us from having to concatenate the fields ourselves on the form. Go back to the frmOrganization form in the AW.Win project. In the form's Load event, add the code in Example 5.15.
1 private void frmOrganization_Load(object sender, EventArgs e)
2 {
3     cbEmployee.DataSource = EmployeeEntity.GetEmployees();
4     cbEmployee.DisplayMember = "EmployeeDisplayName";
5     cbEmployee.ValueMember = EmployeeFieldIndex.EmployeeId.ToString();
6 }
Example 5.15. Form frmOrganization Load event handler
In the Load event of the form, we call the GetEmployees() method and bind the result to the ComboBox. Then we set the DisplayMember and ValueMember to display the correct properties. Notice that we use the field index in Line 5 instead of a string, so we will be notified if the schema changes. Remember that the property in Line 4 is the calculated extension property that we added to our entity in Example 5.14 and is not a real property in our database; therefore, the name of the property will not be in our field index. Change Lines 5-6 of Example 5.12 to the following:

EmployeeEntity Employee = new EmployeeEntity(
    Convert.ToInt32(cbEmployee.SelectedValue));

This change ensures that we get the EmployeeId from the correct location in the control now that we are populating the control with employee names. Run the application and open the form. You will get the results in Figure 5.16.
Now that we have extended the business logic layer, the form is more intuitive to use! We will use the principle of adding logic to the business layer as often as we can to make the code in our user interface as brief as possible. Now we can move on to demonstrating how to work with a view.
Double-click on the form's title bar (not the DataGridView) to have Visual Studio automatically create the form's Load event handler. Import the namespaces in Example 5.5, and also import the AW.Data.TypedViewClasses namespace, which contains all of our typed view classes. Add the code in Example 5.18 to the form's Load event.

1 private void frmCustomers_Load(object sender, EventArgs e)
2 {
3     CustomerViewTypedView Customers = new CustomerViewTypedView();
4     Customers.Fill();
5     dgvResults.DataSource = Customers;
6 }
Example 5.18. Form frmCustomer Load event handler
The code in this form is brief. We simply instantiate the typed view class, then call the Fill() method, with no parameters. This code will retrieve all the rows in the view. The results are then bound to the DataGridView. Compile and run the application, and select Reports > Customers from the menu.
Figure 5.19. Form frmCustomer typed view results
You should get results like Figure 5.19. Not bad for only 3 lines of code! Try clicking on a column header and you will notice that you can sort the items in the grid. Sorting is a nice feature to have, and it did not cost us any extra work.

A disadvantage to these 3 lines of code, however, is that our columns are created automatically and do not take advantage of the available space. Some columns are too large for the data they contain and some are too small. Normally, we would not want to show every column in the view but select a subset of columns to display. We need to set up our columns in advance instead of creating them automatically. If we were working with a stored procedure, setting up columns in advance would be difficult, as we would have no idea of the contents of a generic DataTable until run time. With this typed view, however, we are working with a typed DataTable, meaning that we know exactly which columns are in the view right now, and we can use them to help set up our DataGridView. In the design view of frmCustomers, look in the Toolbox (normally on the left side of the screen). Locate the CustomerViewTypedView DataTable (Figure 5.20).
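The reason a typed DataTable helps at design time can be sketched with plain ADO.NET. A typed DataTable is simply a subclass whose columns are declared up front and exposed as compile-time members. The names below are hypothetical; the generated CustomerViewTypedView is a much richer version of the same idea:

```csharp
using System;
using System.Data;

// A hand-rolled analogue of a generated typed view (hypothetical class).
// The columns are created in the constructor, so they exist -- and are
// visible to the designer and the compiler -- before any data is loaded.
public class CustomerTypedTable : DataTable
{
    public DataColumn CustomerIdColumn { get; }
    public DataColumn FirstNameColumn { get; }

    public CustomerTypedTable() : base("CustomerView")
    {
        CustomerIdColumn = Columns.Add("CustomerId", typeof(int));
        FirstNameColumn = Columns.Add("FirstName", typeof(string));
    }

    // A typed accessor: callers get an int back, not an object to cast.
    public int CustomerIdAt(int rowIndex)
    {
        return (int)Rows[rowIndex][CustomerIdColumn];
    }
}
```

Because the columns exist the moment the table is constructed, design-time tools can enumerate them without touching the database, which is what makes the visual column configuration in the next steps possible.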
Click to highlight the CustomerViewTypedView object (your cursor will change into a gear) and then click anywhere in your form. The item will be added to the component tray (Figure 5.21). When you have a data item like this sitting in the component tray, you can use it to set up your controls visually. Change the DataSource property of your DataGridView to point to customerViewTypedView1 (Figure 5.22). Now you should see a lot of columns in your DataGridView!
Right-click on the DataGridView and select Edit Columns. Remove columns until you are left with those in Figure 5.23. Configure the properties of each column according to Table 5.24.
ColumnName          HeaderText  AutoSizeMode  DefaultCellStyle
CustomerId          ID          AllCells
FirstName           First       AllCells
LastName            Last        AllCells
EmailAddress        Email       Fill
StateProvinceName   State       AllCells
CountryRegionName   Country     AllCells

Table 5.24. Column configuration for the frmCustomers DataGridView
Compile and run the application, selecting Reports > Customers. The grid should now look like Figure 5.25. Using the view from our database to display the results was easy; most of our time was spent configuring the columns to be displayed. Having strongly-typed results, however, allowed us to perform this configuration visually, saving as much time as possible while still returning predictable results.

We are left, however, with one problem that might not be readily apparent. We have already mentioned how hard-coded strings in your code are fragile and unmaintainable. Although we have not created any of these directly, Visual Studio created some on our behalf when it generated the code necessary to display our DataGridView. Luckily, we can do a search to find these maintainability nightmares before they cause us trouble in the future. Go to Edit > Find and Replace > Find in Files (Ctrl + Shift + F). Enter the criteria shown in Figure 5.26, and press Find All.
You should find 6 occurrences in the frmCustomers.Designer.cs file (one for each of the columns in the DataGridView). Double-click a search result to automatically open the frmCustomers.Designer.cs file. Change every DataPropertyName value that contains a string to the corresponding field index. For example:

this.customerIdDataGridViewTextBoxColumn.DataPropertyName = "CustomerId";

...should be changed to...

this.customerIdDataGridViewTextBoxColumn.DataPropertyName =
    AW.Data.CustomerViewFieldIndex.CustomerId.ToString();
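The substitution works because calling ToString() on an enum member returns the member's identifier, so the string the DataGridView binds against always matches the generated code. A minimal sketch of the mechanism (the enum below is a hypothetical stand-in for the generated AW.Data.CustomerViewFieldIndex):

```csharp
using System;

// Hypothetical stand-in for a generated field-index enum.
public enum CustomerViewFieldIndex
{
    CustomerId,
    FirstName,
    LastName
}

public static class ColumnBinding
{
    // ToString() on an enum member yields its identifier. If a schema
    // change renames the member when the code is regenerated, this line
    // stops compiling instead of silently binding to a stale string.
    public static string CustomerIdPropertyName()
    {
        return CustomerViewFieldIndex.CustomerId.ToString();
    }
}
```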
If we use our field indexes instead of hard-coded strings, we will be alerted if the schema changes later: the change will alter the field index and keep our code from compiling. Visual Studio has no way of knowing the field indexes are available, so we have to make changes like this by hand. Making these minor changes may seem like a hassle now, but it will make our code much more maintainable in the future. From here forward, after you configure DataGridView columns visually, you should do a search like the one above and change any hard-coded strings that you find. After changing each column's DataPropertyName, compile and run the application again. Open the form to make sure the DataGridView still displays correctly. Now we will take a look at using a view as an entity.
In this example, we use the view from the database as an entity (like a table) instead of as a typed view. Since an entity can have relationships, we first instantiate an order entity in Line 3. We then retrieve the corresponding CustomerViewRelatedEntity using the CustomerViewRelated field. Because we added this relationship between the view and the SalesOrderHeader table in Chapter 2, we save the time of having to narrow down the rows in the view ourselves. We only have a single entity, however, so in order to display the data in the DataGridView, we create a collection in Lines 5-6 and add the entity to it in Line 7. Then we bind the collection to the DataGridView in Line 8. Compile and run the application. Opening the form from the menu should give you the result in Figure 5.28.
This example is not the most useful, since we hard-coded the OrderID! It does serve as an example of why you might choose to add a view as an entity instead of a typed view. The relationships make it easy to navigate between the tables and filter the results. You also have the ability to use this view just like you would a normal entity to make INSERTS, UPDATES, and DELETES, with changes going to the correct tables that make up the view (editing, saving
and deleting entities are discussed in later chapters). A view entity differs from a typed view, whose results are read-only. If you have relationships that you could define from the columns of your view, adding the view as an entity maximizes your options. On the other hand, if you know the item will just be a stand-alone, read-only object, then adding it as a typed view is the best solution. Next we will put our typed list to work and display it on a form.
Here we instantiate the typed list, call the Fill() method, and bind it to the DataGridView. The typed list returns a strongly-typed DataTable. Like a typed view, a typed list is read-only. Running the application and displaying the form will give you the exact same results as Figure 5.22. Why, then, would one use a typed list instead of a typed view or a view added as an entity? A typed list is an object we defined in our LLBLGen Pro project, and it does NOT correspond to any real object in our database. With a typed list, we have the benefits of a strongly-typed set of results without the hassle of creating a view in the database. Unlike a view that is added as an entity, however, no relationships with other entities are possible.
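The role a typed list plays, a read-only, strongly-typed projection that joins several entities but lives only in the project, can be approximated with a LINQ join over in-memory objects. All names below are hypothetical; LLBLGen Pro performs the equivalent join in the database:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical in-memory stand-ins for two entities.
public class Employee { public int EmployeeId; public int ContactId; }
public class Contact { public int ContactId; public string LastName; }

public static class EmployeeList
{
    // A read-only, strongly-typed projection joining two "entities" --
    // the same role a typed list plays, defined in code rather than as
    // a view in the database.
    public static List<(int EmployeeId, string LastName)> Build(
        IEnumerable<Employee> employees, IEnumerable<Contact> contacts)
    {
        return employees
            .Join(contacts,
                  e => e.ContactId,
                  c => c.ContactId,
                  (e, c) => (e.EmployeeId, c.LastName))
            .OrderBy(row => row.LastName)
            .ToList();
    }
}
```

Like a typed list, the shape of the result is fixed at compile time, and the projection cannot be used to update the underlying objects, only to read them.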
Summary
We covered quite a few topics in this chapter. First, we used several stored procedures to display the hierarchy of managers and employees in the company. We discussed the disadvantages of using stored procedures and achieved the same results using entity objects, navigating through the relationships between them. We also wrote our first dynamic query, which used a field in another table via a relationship and sorted the data in the database. Then we worked with a view of customers that consolidates information from several tables. We explored the difference between a view added as a typed view and a view added as an entity. We also examined a typed list that mirrored our view almost exactly but was merely an object we created in our LLBLGen Pro project. In the next chapter, we will create a search form that runs some complex queries against our database, and we will learn to work with predicate objects and predicate expressions.
Joseph Chancellor 2006