
1.

INTRODUCTION TO PROJECT

1.1. INTRODUCTION:

The SQL Lightweight Tutoring Module (SQL-LTM) is designed to evaluate the correctness of SQL queries in order to provide useful feedback and guidance to students in their effort to learn SQL. To achieve this, the XML representations of the queries are processed and analyzed in several steps. One important step in this process is to transform a query into a logically equivalent one with a different structure that is easier to analyze further. This project focuses on the query transformation patterns and their key role in the semantic evaluation of SQL queries of arbitrary complexity.
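As a concrete illustration (not taken from the project itself; the schema, table and column names below are invented), two structurally different queries can be logically equivalent: an IN sub-query and its JOIN rewrite return the same rows, which is exactly the situation SQL-LTM must recognize.

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dept(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE emp(id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER REFERENCES dept(id));
INSERT INTO dept VALUES (1,'CS'),(2,'EE');
INSERT INTO emp VALUES (1,'Ann',1),(2,'Bob',2),(3,'Eve',1);
""")

# A student's answer: nested IN sub-query.
student = "SELECT name FROM emp WHERE dept_id IN (SELECT id FROM dept WHERE name='CS')"
# The reference answer: an equivalent JOIN formulation.
reference = "SELECT emp.name FROM emp JOIN dept ON emp.dept_id=dept.id WHERE dept.name='CS'"

# Logically equivalent queries produce identical result sets.
assert sorted(conn.execute(student)) == sorted(conn.execute(reference))
```

Comparing result sets on sample data, as above, only suggests equivalence; the module's approach of comparing transformed query structures is what establishes it independently of the data.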

1.2. Purpose:
The project is designed to provide semantic feedback on otherwise syntactically correct SQL statements, detecting semantic errors in a query and recommending how to rewrite the query correctly. For example, when a student is asked to solve a query, he/she may come up with various approaches, all of which can be logically correct. The project analyzes the student's solution by comparing it with a reference solution, which is what the instructor expects in the given context.

1.3. Problem In Existing System:


In the existing system there is no tool available that compares two complex SQL queries both semantically and syntactically. Tools such as SQL Tutor and Acharya only compare queries consisting of a single SELECT block.

1.4 Solution to these Problems:

In the proposed system we will develop a tool that compares complex queries. Queries with multiple sub-queries nested at arbitrary depth levels are first analyzed from the point of view of their structure. Queries that do not match their reference queries may still be correct, so transformation patterns that preserve logical equivalence are applied in an attempt to convert them to the expected structure. The second step consists of the pair-wise comparison of the corresponding SELECT blocks.
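The two-step idea can be pictured with a toy model (the dictionary shape and function below are invented for illustration, not the project's actual data structures): queries are first reduced to a structural skeleton of nested SELECT blocks, and only when skeletons match do the blocks get compared pairwise.

```python
# Toy structural skeleton: the nesting of SELECT blocks as nested tuples.
def skeleton(query_tree):
    """Return the nesting structure of SELECT blocks, ignoring their contents."""
    return ("SELECT", tuple(skeleton(sub) for sub in query_tree.get("subqueries", [])))

ref = {"subqueries": [{"subqueries": []}]}          # SELECT with one nested block
ans_match = {"subqueries": [{"subqueries": []}]}    # same shape as the reference
ans_diff = {"subqueries": []}                       # flat query: shapes differ

assert skeleton(ref) == skeleton(ans_match)          # proceed to block-by-block compare
assert skeleton(ref) != skeleton(ans_diff)           # try equivalence-preserving rewrites first
```

In the real module, a mismatch does not immediately mean the answer is wrong: the transformation patterns of section 1.5 (Module 4) are applied before structures are compared again.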

1.5 Modules:

Module1:
The first module provides authentication for students and professors. Registration details are stored in the database, and whenever a student or professor logs in, the stored credentials are retrieved to check whether the user is authorized. The front end is designed using Flex.
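A minimal sketch of such a credential check follows. The table and column names are hypothetical, an in-memory database stands in for the project's SQL Server back end, and passwords are stored hashed rather than in plain text (an assumption, not something the report specifies):

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users(username TEXT PRIMARY KEY, pwd_hash TEXT, role TEXT)")

def sha256(s):
    return hashlib.sha256(s.encode()).hexdigest()

# Registration stores a hash, never the plain-text password.
conn.execute("INSERT INTO users VALUES (?,?,?)", ("alice", sha256("s3cret"), "student"))

def authenticate(username, password):
    """Return the user's role if the credentials match, else None."""
    row = conn.execute("SELECT role FROM users WHERE username=? AND pwd_hash=?",
                       (username, sha256(password))).fetchone()
    return row[0] if row else None

assert authenticate("alice", "s3cret") == "student"
assert authenticate("alice", "wrong") is None
```

The role returned here would drive which view (student or professor) the Flex front end switches to after login.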

Module2:
The first step in the query evaluation process is to parse the query into an equivalent XML representation that captures all of its information.
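For instance, a basic SELECT block could be mapped to XML along these lines (a simplified, hypothetical representation; the project's real XML schema is certainly richer):

```python
import xml.etree.ElementTree as ET

def select_to_xml(columns, table, where=None):
    """Build a toy XML representation of a single SELECT block."""
    q = ET.Element("select")
    cols = ET.SubElement(q, "columns")
    for c in columns:
        ET.SubElement(cols, "column").text = c
    ET.SubElement(q, "from").text = table
    if where:
        ET.SubElement(q, "where").text = where
    return q

xml = select_to_xml(["name"], "emp", "dept_id = 1")
print(ET.tostring(xml, encoding="unicode"))
```

Once a query is in tree form like this, both the structural compare and the transformation patterns become tree operations rather than string manipulation.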

Module3:

The data/content compare operation follows the structural compare step and performs a pairwise comparison of the corresponding basic SELECT blocks of two queries that have the same structure.
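One way to picture the content compare (a sketch with invented clause names, not the project's actual algorithm): once two queries have the same structure, each pair of corresponding blocks is compared clause by clause, treating order-insensitive parts such as column lists as sets.

```python
def compare_blocks(ref, ans):
    """Compare two SELECT blocks clause by clause; return the list of differing clauses."""
    diffs = []
    # Column and predicate lists are compared as sets: ordering does not matter.
    for clause in ("columns", "where"):
        if set(ref.get(clause, [])) != set(ans.get(clause, [])):
            diffs.append(clause)
    if ref.get("from") != ans.get("from"):
        diffs.append("from")
    return diffs

ref = {"columns": ["name", "id"], "from": "emp", "where": ["dept_id = 1"]}
ans = {"columns": ["id", "name"], "from": "emp", "where": []}
assert compare_blocks(ref, ans) == ["where"]  # columns match despite different ordering
```

Reporting which clause differs, rather than a bare pass/fail, is what makes the feedback to the student useful.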

Module4:
In this module we will develop the transformation patterns, where a transformation is a conversion of a query, that is, of its XML representation, from one type into a logically equivalent query of another type.

2. SOFTWARE REQUIREMENT SPECIFICATION

2.1 Introduction:

The project is designed to provide semantic feedback on otherwise syntactically correct SQL statements, detecting semantic errors in a query and recommending how to rewrite the query correctly. For example, when a student is asked to solve a query, he/she may come up with various approaches, all of which can be logically correct. The project analyzes the student's solution by comparing it with a reference solution, which is what the instructor expects in the given context.

Constraints:

Site Adaptation Requirements: Only the system administrator and DBA are authorized to carry out this task jointly.

Assumptions and Dependencies: It is assumed that all systems will have the basic hardware, software and communication interfaces available, and that the users are trained in using the application.

Apportioning of Requirements: Identify requirements that may be delayed until future versions of the system. For this system, all requirements will be met in the first version.

2.2 Logical Database Requirements:

Logical requirements connected with the database include: most of the values are string types, while counts are numeric. The count of patients associated with a specific disease is updated immediately after each patient record is entered. Access rights are limited to authenticated users only. Integrity constraints are maintained by setting up the relationships.
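The running count described above can be obtained with a grouped aggregate, re-evaluated after each insert. The schema below is illustrative only (the patient/disease example appears to come from a generic requirements template rather than this project's own tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patient(name TEXT, disease TEXT)")
conn.executemany("INSERT INTO patient VALUES (?,?)",
                 [("A", "flu"), ("B", "flu"), ("C", "cold")])

# Count of patients per disease, refreshed after each record is entered.
counts = dict(conn.execute(
    "SELECT disease, COUNT(*) FROM patient GROUP BY disease"))
assert counts == {"flu": 2, "cold": 1}
```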

Functions:
a) Validity checks on the inputs (data entry operators).
b) Responses to abnormal situations, including overflow.
c) Communication facilities: Internet, telephone.
d) Error handling and recovery: periodic backups, error alerts, maintained error logs.
e) Effect of parameters.

2.3 Performance Requirements:


Presently we are working on three terminals. It is expected that at any point of time three terminals will be in operation simultaneously. The information will be numerical and text oriented, and its volume will be limited.

2.4 Specific Requirements:


Requirements 1 through 8 are tabulated with the columns: Unique ID, Requirement, Remarks.

2.4.1 Hardware Requirements:

The following hardware is required:
Pentium 4 processor
1 GB RAM
80 GB HDD

2.4.2 Software Requirements:

The following software is required:
SQL Server 2008
C# as the programming language
LINQ
XML

2.5 Memory Constraints:

A minimum of 1 GB of RAM and 40 GB of hard disk space.

Broad level software requirements :


Requirement / Remarks:

1. Make User Registration: provide a registration and authentication user interface.
2. Allows the user to upload files.
3. Allows the user to view the files.
4. Allows the user to download the files.

2.6 Databases

Professor Registration

Student Registration

Professor Questions and Answers

Student Table

Customer Table

3 SYSTEM DESIGN

3.1 Architecture Diagram:

The project architecture represents the components we are using as part of our project and the flow of request processing.

3.2 UML Diagrams


Design Patterns brought a paradigm shift in the way object oriented systems are designed. Instead of relying on knowledge of the problem domain alone, design patterns allow past experience to be utilized while solving new problems. Traditional object oriented design (OOD) approaches such as Booch, OMT, etc. advocated identification and specification of individual objects and classes. Design Patterns, on the other hand, promote identification and specification of collaborations of objects and classes. However, much of the focus of recent research has been on identification and cataloging of new design patterns. The effort has been to assimilate knowledge gained from designing systems of the past, in various problem domains. The problem analysis phase has gained little benefit from this paradigm. Most projects still use traditional object oriented analysis (OOA) approaches to identify classes from the problem description. Responsibilities are assigned to those classes based upon the obvious description of entities given in the problem definition.

Pattern Oriented Technique (POT) is a methodology for identifying interactions among classes and mapping them to one or more design patterns. However, this methodology also uses traditional OOA for assigning class responsibilities. As a result, its interaction oriented design phase (driven by design patterns) receives its input in terms of class definitions that might not lead to best possible design.

The missing piece here is the lack of an analysis method that can help in identifying class definitions and the collaborations between them which would be amenable to application of interaction oriented design. There are two key issues here. First is to come up with good class definitions and the second is to identify good class collaborations.

It has been observed that even arriving at good class definitions from the given problem definition is non-trivial. The key to various successful designs is the presence of abstract classes (such as an event handler) which are not modeled as entities in the physical world and hence do not appear in the problem description. Anticipating change has been proposed as a method for identifying such abstract classes in a problem domain. Another difficult task relates to the assignment of responsibilities to entities identified from the problem description. Different responsibility assignments can lead to completely different designs. Current approaches such as Coad and Yourdon, POT, etc. follow the simple approach of using entity descriptions in the problem statement to define classes and fix responsibilities. We propose to follow a flexible approach towards assigning responsibilities to classes so that the best responsibility assignment can be chosen.


The second issue is to identify class collaborations. Techniques such as POT analyze interactions among different sets of classes as specified in the problem description. Such interacting classes are then grouped together to identify design patterns that may be applicable. However, as mentioned earlier, only the interactions among obvious classes are determined currently. Other interactions involving abstract classes not present in the problem or interactions that become feasible due to different responsibility assignments are not considered. We present some techniques that enable the designer to capture such interactions as well.

3.2.1 Interaction Based Analysis and Design

Top-down approach: This approach is applicable to situations where the designer knows the solution to the given problem. This is true for problem domains that have well established high-level solutions, where different implementations vary only in low-level details (e.g. Enterprise Resource Planning (ERP) systems). The designer's main concern is to realize that solution in a way such that the implemented system has desirable properties such as maintainability and reusability.

To achieve this goal, the system designer selects appropriate design patterns that form the building blocks of her solution. Having obtained this design template (design type), she maps the classes and objects participating in those patterns to the entities of the problem domain. This mapping implicitly defines the responsibilities of the various classes/objects that represent those entities. To clarify the concept, consider a scenario where an architect is assigned the task of building a flyover. Flyover construction is an established science and the architect knows the solution to the problem. She starts by identifying component patterns such as the road strip, support pillars, side railings and so on. Having done that, she maps the participating objects to actual entities in the problem domain. This would involve defining the length and width of the road strip based upon the space constraints specified in the problem. The height and weight of the pillars are decided based upon the load requirements specified. The entry and exit points are decided based upon the geography of the location, and so on. This results in a concrete design instance. Some new classes or objects, not existing in the domain model, may also have to be introduced for a successful instantiation of the design template. For instance, the problem domain may not model an abstract entity such as an event handler which may be a participant in some portion of the design template. Such generic classes/objects may be drawn from a common repository of utility classes. The interaction driven analysis phase here is simple since the interactions (in the form of design patterns) are already well established and directly obtained from the knowledge base.

Bottom-up approach: This approach is applicable in scenarios where interactions in the problem domain are not well understood and need to be discovered and explored. This situation is a fundamental problem faced by the designers of object oriented systems. It relates to the fact that object oriented analysis (OOA) does not help much in creating a solution to the problem at hand. The analysis phase is mainly concerned with enhancing the understanding of the problem domain. This knowledge is later used by a problem solving approach to come up with a solution possessing good design properties. As a result, at the end of the analysis phase the designer has a set of well defined components that need to be assembled together to realize a solution. For instance, to build a route finder application the OOA phase helps in modeling domain objects such as roads, vehicles, cities and addresses, but does not actually provide a solution for finding routes between two given addresses. This is similar to having the various pieces of a jigsaw puzzle while the puzzle still needs to be solved. The problem in software systems is further complicated by the fact that there is generally no unique solution to a problem. There are always trade-offs at various stages, and the resulting designs are a reflection of the choices made at those stages. In the jigsaw puzzle example this is similar to the situation where different sets of the same puzzle are available, each differing from another in the design of its component pieces. Some component designs may help in solving the puzzle faster and more efficiently than others.


The bottom-up approach helps in such situations, where the entities in the problem domain have been identified by traditional OOA techniques but multiple choices exist for assigning responsibilities to those entities. Unlike the top-down approach, the mapping of responsibilities to entities is not dictated by a design solution specified by the designer. Instead, the task of the designer here is to try various responsibility assignments and create an interaction specification involving those objects. The objective of this interaction driven analysis is to obtain an interaction specification that helps in arriving at a solution with the best design characteristics possible. Having identified the entities in the domain, the starting point for the designer is to identify the various alternatives available for assigning responsibilities to individual objects. Her domain knowledge helps her in this task. Given these alternatives for potential object definitions and standard utility objects (such as schedulers, event handlers etc.), the next step is to find compositions of these building blocks (i.e. interactions of these objects) that provide alternative solutions to the problem. This task is non-trivial, especially when done manually. There are simply too many combinations for any human designer to consider in order to obtain alternative solutions in a reasonable amount of time. We need to apply (semi-)automated software composition techniques based on some formal specification. Several such approaches have recently been investigated in the context of e-services. These include workflow based approaches and AI planning based techniques. Other formal techniques for specifying composition include Petri-net based models, automata-based models and temporal logics from the verification community, and XQuery and XML constraint tools based techniques from the data management community. The resulting candidate compositions (i.e. interaction specifications) then need to be compared with existing design patterns, either manually or automatically. It is not beyond imagination that, with advancements in automated composition techniques, new design patterns may be identified during this process. For instance, techniques such as Reinforcement Learning have produced novel solutions in various domains, such as playing Backgammon. In such a case, the resulting designs may need to be evaluated manually. The best design among the alternatives is then chosen for implementing the system.


Open Issues

Identifying interactions


This is a crucial step in the analysis phase, and the success of the remaining phases depends on it. The issue here is to identify interactions which are not evident from the problem description but may hold the key to an efficient design solution. The bottom-up approach proposed in this paper takes a step in this direction, but a lot more work is needed. The analysis method should be able to incorporate abstract classes such as event handlers, proxies etc. Moreover, current analysis methods map entities to responsibilities of individual classes in terms of the services they provide and the methods they invoke on other classes. However, an entity may be realized by a set of classes. For instance, an adapter class hides the interface of an adaptee class, and together they provide the desired functionality. Similarly, an abstraction and its implementation provide a single piece of functionality through separate classes, resulting in increased maintainability. The analysis method needs to be able to determine when it is appropriate to realize an entity responsibility by means of multiple interacting classes.

Representation of Class Responsibilities: Since we need to specify different alternative class responsibilities, as in the bottom-up approach, a mechanism is required to document them in a machine interpretable format. Some of these responsibilities would be captured in the form of the methods a class exports or the methods it invokes on other classes. However, other responsibilities with respect to its interaction with other classes need to be explicitly specified. These may include pre- and post-conditions for different method invocations, and other properties such as hasSameInterfaceAs <another class>, hidesInterfaceOf <another class> etc. Existing specification languages could be used as they are, or extended for this purpose.

Language for Specifying Design Patterns


The approaches for OO Design proposed in this paper favor automatic techniques over manual ones for reasons described earlier. This means that we need a mechanism to be able to express design patterns in a format amenable to be read and interpreted by programs. Some attempts have been made at defining such pattern description languages [14, 13]. One of these or some variation of these could be used to express design patterns in a formal language.

Comparison of Software Designs


Once we have alternative designs available, they need to be compared to arrive at the best one. Each design may consist of multiple design patterns. The criteria here would not be to simply count the number of design patterns used but to evaluate the interaction between patterns and also between other design elements used. This would involve an understanding of good and bad design interactions and an ability to identify them in a given design. The final challenge would be to do it automatically.


3.2.2 Structural Diagram


Class Diagram: Class diagrams identify the class structure of a system, including the properties and methods of each class. Also depicted are the various relationships that can exist between classes, such as inheritance relationships. The Class diagram is one of the most widely used diagrams in the UML specification.

Class Diagrams are given to depict interactions

Object Diagram:


Object diagrams model instances of classes. This type of diagram is used to describe the system at a particular point in time. Using this technique, you can validate the class diagram and its multiplicity rules with real-world data, and record test scenarios. From a notation standpoint, Object diagrams borrow elements from Class diagrams.

Component Diagram: Component diagrams fall under the category of an implementation diagram, a kind of diagram that models the implementation and deployment of the system. A Component Diagram, in particular, is used to describe the dependencies between various software components such as the dependency between executable files and source files. This information is similar to that within make files, which describe source code dependencies and can be used to properly compile an application.

3.2.3 Deployment Diagram


Deployment diagrams are another model in the implementation diagram category. The Deployment diagram models the hardware used in implementing a system and the association between those hardware components. Components can also be shown on a Deployment diagram to show the location of their deployment. Deployment diagrams can also be used early on in the design phase to document the physical architecture of a system.

3.3 Behavioral Diagrams:


Use Case Diagram: Use Case diagrams identify the functionality provided by the system (use cases), the users who interact with the system (actors), and the association between the users and the functionality. Use Cases are used in the Analysis phase of software development to articulate the high-level requirements of the system. The primary goals of Use Case diagrams include:


Providing a high-level view of what the system does
Identifying the users ("actors") of the system
Determining areas needing human-computer interfaces

Use Cases extend beyond pictorial diagrams. In fact, text-based use case descriptions are often used to supplement diagrams, and explore use case functionality in more detail.

login

createProfile

professor

addQuestion

addAnswer

logout

Professor Usecase Diagram


login

createProfile

viewQuestion

student answeringQuestion

compareResult

logout

Student Use case Diagram


3.4 Sequence Diagram


Sequence diagrams document the interactions between classes to achieve a result, such as a use case. The Sequence diagram lists objects horizontally, and time vertically, and models these messages over time.


Professor

Student login

Application

Database

1. login; 2. checks; 3. gets response; 4. adds question related to SQL query; 5. stores; 6. retrieves; 7. adds answer related to query; 8. stores; 9. gets response; 10. login; 11. checks; 12. gets response; 13. views question related to SQL query; 14. checks; 15. displays question; 16. answers the SQL query; 17. stores; 18. compares; 19. checks; 20. gets result; 21. logout; 22. logout.

3.5 Collaboration Diagram


Collaboration diagrams model the interactions between objects. This type of diagram is a cross between an object diagram and a sequence diagram. It uses a free-form arrangement of objects, which makes it easier to see all interactions involving a particular object.

Professor

Student

1: login 4: adds question related to sql query 7: add answer related to query 21: logout

10: login 13: view question related to sql query 16: answer the sql query 18: compare 22: logout 2: checks 5: stores 8: stores 11: checks 14: checks 17: stores 19: checks 3: gets response 6: retrieves 9: get response 12: get response 15: display question 20: get result

Application

Database

3.6 State chart Diagram

State diagrams are used to document the various modes ("states") that a class can go through, and the events that cause a state transition.

3.7 Activity Diagram


Activity diagrams are used to document workflows in a system, from the business level down to the operational level. The general purpose of Activity diagrams is to focus on flows driven by internal processing vs. external events.

4. SYSTEM IMPLEMENTATION

4.1 Database Functions

1. The database should provide facilities to:
a) insert records, b) update, c) edit, d) delete (restricted users only), e) search and f) sort records;
g) create and edit master and transaction records;
h) all interaction to and from the database should be through web pages;
i) the database should have the following tables:
I. Data Entry Personnel: First Name, Last Name, Address, Contact Details, Qualification, Place of Work, Designation, etc.
II. Data Entry Points: user credentials.
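A minimal sketch of the insert/update/delete/search/sort facilities against such a table follows (illustrative table and column names; an in-memory database stands in for the project's SQL Server back end):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE operator(id INTEGER PRIMARY KEY, first_name TEXT, city TEXT)")

# insert
conn.execute("INSERT INTO operator(first_name, city) VALUES (?,?)", ("Ravi", "Hyderabad"))
# update / edit
conn.execute("UPDATE operator SET city=? WHERE first_name=?", ("Warangal", "Ravi"))
# search + sort
rows = conn.execute("SELECT first_name, city FROM operator ORDER BY first_name").fetchall()
assert rows == [("Ravi", "Warangal")]
# delete (restricted to authorized users, enforced at the application layer)
conn.execute("DELETE FROM operator WHERE first_name=?", ("Ravi",))
assert conn.execute("SELECT COUNT(*) FROM operator").fetchone()[0] == 0
```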

2. Authentication: Users at all levels should be authenticated before being given access.

3. Analysis: Outputs of all analyses should be in the form of


a) Data in tabular form b) Graphical representation of Performance.

4.2 Pseudo Code

4.2.1 Login:

<s:Group xmlns:fx="http://ns.adobe.com/mxml/2009"
         xmlns:s="library://ns.adobe.com/flex/spark"
         xmlns:mx="library://ns.adobe.com/flex/mx"
         width="100%" height="100%">
<fx:Script>
<![CDATA[
import mx.collections.XMLListCollection;
import mx.controls.Alert;
import mx.core.FlexGlobals;
import mx.rpc.events.ResultEvent;

[Bindable] public var xml1_doc:XML;
[Bindable] public var xmllist1_doc:XMLListCollection;

protected function operation1_resultHandler(event:ResultEvent):void


{
    if(event.result.toString()=="Student Login Successful")
    {
        FlexGlobals.topLevelApplication.viewstack1.selectedIndex=1;
        LINQ.retrieve_questions.send();
    }
    if(event.result.toString()=="Professor Login Successful")
    {
        FlexGlobals.topLevelApplication.viewstack1.selectedIndex=2;
    }
}

protected function operation2_resultHandler(event:ResultEvent):void
{
    //Alert.show(event.result.toString());
    xml1_doc=new XML(event.result);
    xmllist1_doc=new XMLListCollection(xml1_doc.children());
}

protected function button1_clickHandler(event:MouseEvent):void
{
    LINQ.login.send(username.text,pwd.text);
    username.text="";
    pwd.text="";
}


protected function linkbutton1_clickHandler(event:MouseEvent):void { FlexGlobals.topLevelApplication.viewstack1.selectedIndex=3; }

protected function linkbutton2_clickHandler(event:MouseEvent):void { FlexGlobals.topLevelApplication.viewstack1.selectedIndex=4; }

protected function button2_clickHandler(event:MouseEvent):void { username.text=""; pwd.text=""; }

]]> </fx:Script> <fx:Declarations> <s:WebService id="LINQ" wsdl="http://localhost:1047/Linq123/service.asmx?wsdl">


<s:operation name="login" result="operation1_resultHandler(event)"/>
<s:operation name="retrieve_questions" result="operation2_resultHandler(event)"/>
</s:WebService>
<mx:StringValidator source="{username}" property="text" requiredFieldError="Enter UserName" trigger="{login}" triggerEvent="click"/>
<mx:StringValidator source="{pwd}" property="text" requiredFieldError="Enter Password" trigger="{login}" triggerEvent="click"/>
</fx:Declarations>
<s:BorderContainer width="374" height="218" backgroundColor="#080000" cornerRadius="20" fontFamily="Times New Roman" fontSize="14" horizontalCenter="46" verticalCenter="-96">
<s:Label x="10" y="68" color="#CAC3C3" text="User Name"/>
<s:Label x="10" y="98" color="#C5BEBE" text="Password"/>
<s:TextInput x="100" y="68" id="username"/>
<s:TextInput id="pwd" x="100" y="98" displayAsPassword="true"/>
<s:Button x="100" y="142" id="login" label="Log In" click="button1_clickHandler(event)"/>
<s:Button x="178" y="142" label="Cancel" click="button2_clickHandler(event)"/>
<mx:LinkButton x="40" y="185" label="Student Reg??" click="linkbutton1_clickHandler(event)" color="#B8B4B4"/>
<mx:LinkButton x="150" y="187" label="Professor Reg??" click="linkbutton2_clickHandler(event)" color="#B3ACAC"/>


<s:Label width="100%" height="2" backgroundColor="#CCCCCC" chromeColor="#CCCCCC" color="#CCCCCC" horizontalCenter="0" verticalCenter="-69"/> <s:Label x="19" y="19" chromeColor="#CCCCCC" color="#CCCCCC" fontSize="20" text="Login"/> </s:BorderContainer> </s:Group>

4.2.2 Professor Registration:


<s:Group xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" width="100%" height="100%" color="#B9B3B3" contentBackgroundColor="#0B0A0A"> <fx:Script> <![CDATA[ import mx.collections.ArrayCollection; import mx.controls.Alert; import mx.core.FlexGlobals; import mx.rpc.events.ResultEvent;

protected function operation1_resultHandler(event:ResultEvent):void { Alert.show(event.result.toString()); } [Bindable]public var ClgName:ArrayCollection=new ArrayCollection


( [ {label:"KMIT", data:1}, {label:"SREC", data:1}, {label:"SRMT", data:1}, {label:"Aurora", data:1}, ]); [Bindable]public var University:ArrayCollection=new ArrayCollection ( [ {label:"JNTU", data:1}, {label:"OU", data:1}, {label:"KU", data:1}, ]); protected function button1_clickHandler(event:MouseEvent):void { if(fname.text=="") { Alert.show("Enter First Name"); } else if(lname.text=="") { Alert.show("Enter Last Name");


} else if(username.text=="") { Alert.show("Enter UserName"); } else if(pwd.text=="") { Alert.show("Enter Password"); } else {

LINQ.prof_reg.send(fname.text,lname.text,clgname.selectedItem.label,university.selectedItem.label,username.text,pwd.text); } }

protected function button2_clickHandler(event:MouseEvent):void { FlexGlobals.topLevelApplication.viewstack1.selectedIndex=0; fname.text=""; lname.text=""; username.text=""; pwd.text=""; }


]]> </fx:Script> <fx:Declarations> <s:WebService id="LINQ" wsdl="http://localhost:1047/Linq123/service.asmx?wsdl"> <s:operation name="prof_reg" result="operation1_resultHandler(event)"/> </s:WebService> <mx:StringValidator source="{fname}" property="text" requiredFieldError="Enter FirstName" trigger="{register}" triggerEvent="click"/> <mx:StringValidator source="{lname}" property="text" requiredFieldError="Enter LastName" trigger="{register}" triggerEvent="click"/> <mx:StringValidator source="{username}" property="text" requiredFieldError="Enter UserName" trigger="{register}" triggerEvent="click"/> <mx:StringValidator source="{pwd}" property="text" requiredFieldError="Enter Password" trigger="{register}" triggerEvent="click"/> </fx:Declarations> <s:BorderContainer width="465" height="314" backgroundColor="#080808" color="#0A0909"


cornerRadius="20" fontFamily="Times New Roman" fontSize="14" horizontalCenter="15" verticalCenter="-80">
<s:Label x="10" y="60" color="#B9B4B4" text="First Name"/>
<s:Label x="10" y="90" color="#B3ADAD" text="Last Name"/>
<s:Label x="10" y="120" color="#BAB5B5" text="College Name"/>
<s:Label x="10" y="150" color="#B6B0B0" text="University"/>
<s:Label x="10" y="180" color="#B7B0B0" text="User Name"/>
<s:Label x="10" y="210" color="#B7ADAD" text="Password"/>
<s:TextInput id="fname" x="130" y="60" color="#FEFCFC"/>
<s:TextInput id="lname" x="130" y="90" color="#FCF9F9"/>
<s:DropDownList id="clgname" x="130" y="120" color="#FAF8F8" dataProvider="{ClgName}" labelField="label" prompt="Select College Name"></s:DropDownList>
<s:DropDownList id="university" x="130" y="150" color="#FDFAFA" dataProvider="{University}" labelField="label" prompt="Select University"></s:DropDownList>
<s:TextInput id="username" x="130" y="180" color="#FCFBFB"/>
<s:TextInput id="pwd" x="130" y="210" color="#FCFBFB"/>
<s:Button x="130" y="241" id="register" label="Register" click="button1_clickHandler(event)"/>
<s:Button x="208" y="241" label="Cancel" click="button2_clickHandler(event)"/>
<s:Label x="0" y="37" width="100%" height="2" backgroundColor="#CCCCCC"/>
<s:Label x="10" y="17" color="#CCCCCC" fontSize="16" text="Professor Registration"/>
<s:Image x="378" y="47" width="80" height="99" source="assets/Derrick_Cogburn_Associate_Professor.JPG"/>


</s:BorderContainer> </s:Group>

4.2.3 Student Registration:


<s:Group xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" width="100%" height="100%" color="#F8F3F3">

<fx:Script> <![CDATA[ import mx.collections.ArrayCollection; import mx.controls.Alert; import mx.core.FlexGlobals; import mx.rpc.events.ResultEvent; protected function button1_clickHandler(event:MouseEvent):void { if(fname.text=="") { Alert.show("Enter First Name"); } else if(lname.text=="") {


Alert.show("Enter Last Name"); } else if(username.text=="") { Alert.show("Enter UserName"); } else if(pwd.text=="") { Alert.show("Enter Password"); } else {

LINQ.stud_reg.send(fname.text,lname.text,clgname.selectedItem.label,university.selectedItem.label,username.text,pwd.text); } } [Bindable]public var ClgName:ArrayCollection=new ArrayCollection ( [ {label:"KMIT", data:1}, {label:"SREC", data:1}, {label:"SRMT", data:1}, {label:"Aurora", data:1},


]); [Bindable]public var University:ArrayCollection=new ArrayCollection ( [ {label:"JNTU", data:1}, {label:"OU", data:1}, {label:"KU", data:1}, ]); protected function operation1_resultHandler(event:ResultEvent):void { Alert.show(event.result.toString()); }

protected function button2_clickHandler(event:MouseEvent):void { FlexGlobals.topLevelApplication.viewstack1.selectedIndex=0; fname.text=""; lname.text=""; username.text=""; pwd.text=""; }

]]>


</fx:Script>

<fx:Declarations>
    <s:WebService id="LINQ" wsdl="http://localhost:1047/Linq123/service.asmx?wsdl">
        <s:operation name="stud_reg" result="operation1_resultHandler(event)"/>
    </s:WebService>
    <mx:StringValidator source="{fname}" property="text" requiredFieldError="Enter FirstName"
                        trigger="{register}" triggerEvent="click"/>
    <mx:StringValidator source="{lname}" property="text" requiredFieldError="Enter LastName"
                        trigger="{register}" triggerEvent="click"/>
    <mx:StringValidator source="{username}" property="text" requiredFieldError="Enter UserName"
                        trigger="{register}" triggerEvent="click"/>
    <mx:StringValidator source="{pwd}" property="text" requiredFieldError="Enter Password"
                        trigger="{register}" triggerEvent="click"/>
</fx:Declarations>

<s:BorderContainer width="453" height="270" backgroundColor="#080808" color="#060000"
                   cornerRadius="20" fontFamily="Times New Roman" fontSize="14"
                   horizontalCenter="15" verticalCenter="-117">
    <s:Label x="17" y="19" color="#CCCCCC" fontSize="16" text="Student Registration"/>
    <s:BorderContainer x="0" y="40" width="100%" height="2" backgroundColor="#CCCCCC"/>
    <s:Label x="10" y="60"  color="#B9B4B4" text="First Name"/>
    <s:Label x="10" y="90"  color="#B3ADAD" text="Last Name"/>
    <s:Label x="10" y="120" color="#BAB5B5" text="College Name"/>
    <s:Label x="10" y="150" color="#B6B0B0" text="University"/>
    <s:Label x="10" y="180" color="#B7B0B0" text="User Name"/>
    <s:Label x="10" y="210" color="#B7ADAD" text="Password"/>
    <s:TextInput id="fname" x="131" y="60" color="#080808"/>
    <s:TextInput id="lname" x="131" y="90" color="#0A0909"/>
    <s:DropDownList id="clgname" x="131" y="120" color="#070707"
                    dataProvider="{ClgName}" labelField="label"
                    prompt="Select College Name"/>
    <s:DropDownList id="university" x="131" y="150" color="#060606"
                    dataProvider="{University}" labelField="label"
                    prompt="Select University"/>
    <s:TextInput id="username" x="131" y="180" color="#050505"/>
    <s:TextInput id="pwd" x="131" y="210"/>
    <s:Button id="register" x="130" y="241" label="Register" color="#050505"
              click="button1_clickHandler(event)"/>
    <s:Button x="208" y="241" label="Cancel" color="#050505"
              click="button2_clickHandler(event)"/>
    <s:Image x="351" y="45" width="95" height="95" source="assets/index.jpg"/>
</s:BorderContainer>
</s:Group>

5. SYSTEM TESTING

Technologies Used:
5.1 Dot Net Technology:
C# and the Microsoft .NET Framework:

The .NET Framework is a computing platform that simplifies application development in the highly distributed environment of the Internet. The .NET Framework is designed to fulfill the following objectives:

- To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but distributed over the Internet, or executed remotely.
- To provide a code-execution environment that minimizes software deployment and versioning conflicts.
- To provide a code-execution environment that guarantees safe execution of code, including code created by an unknown or semi-trusted third party.
- To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.
- To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.
- To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.

The .NET Framework has two main components: the common language runtime and the .NET Framework class library. The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that ensure security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code.

The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts. The following illustration shows the relationship of the common language runtime and the class library to your applications and to the overall system, and also shows how managed code operates within a larger architecture.

Features of the Common Language Runtime:

The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services. These features are intrinsic to the managed code that runs on the common language runtime.

With regards to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin (such as the Internet, enterprise network, or local computer). This means that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application.

The runtime enforces code access security. For example, users can trust that an executable embedded in a Web page can play an animation on screen or sing a song, but cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally rich in features. The runtime also enforces code robustness by implementing a strict type- and code-verification infrastructure called the common type system (CTS). The CTS ensures that all managed code is self-describing. The various Microsoft and third-party language compilers generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type safety.

In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used. This automatic memory management resolves the two most common application errors: memory leaks and invalid memory references.

The runtime also accelerates developer productivity. For example, programmers can write applications in their development language of choice, yet take full advantage of the runtime, the class library, and components written in other languages by other developers. Any compiler vendor who chooses to target the runtime can do so. Language compilers that target the .NET Framework make the features of the .NET Framework available to existing code written in that language, greatly easing the migration process for existing applications. While the runtime is designed for the software of the future, it also supports software of today and yesterday. Interoperability between managed and unmanaged code enables developers to continue to use necessary COM components and DLLs. The runtime is also designed to enhance performance: although the common language runtime provides many standard runtime services, managed code is never interpreted.

.NET Framework Class Library:

The .NET Framework class library is a collection of reusable types that tightly integrate with the common language runtime. The class library is object oriented, providing types from which your own managed code can derive functionality. This not only makes the .NET Framework types easy to use, but also reduces the time associated with learning new features of the .NET Framework. In addition, third-party components can integrate seamlessly with classes in the .NET Framework. For example, the .NET Framework collection classes implement a set of interfaces that you can use to develop your own collection classes. Your collection classes will blend seamlessly with the classes in the .NET Framework.

As you would expect from an object-oriented class library, the .NET Framework types enable you to accomplish a range of common programming tasks, including tasks such as string management, data collection, database connectivity, and file access. In addition to these common tasks, the class library includes types that support a variety of specialized development scenarios. For example, you can use the .NET Framework to develop the following types of applications and services:

- Console applications.
- Scripted or hosted applications.
- Windows GUI applications (Windows Forms).
- ASP.NET applications.
- XML Web services.
- Windows services.

For example, the Windows Forms classes are a comprehensive set of reusable types that vastly simplify Windows GUI development. If you write an ASP.NET Web Form application, you can use the Web Forms classes.

5.2 ASP.NET:

ASP.NET Overview:


ASP.NET is a unified Web development model that includes the services necessary for you to build enterprise-class Web applications with a minimum of coding. ASP.NET is part of the .NET Framework, and when coding ASP.NET applications you have access to classes in the .NET Framework. You can code your applications in any language compatible with the common language runtime (CLR), including Microsoft Visual Basic, C#, JScript .NET, and J#. These languages enable you to develop ASP.NET applications that benefit from the common language runtime, type safety, inheritance, and so on.

What is ASP.NET?
ASP.NET is a server-side technology that enables code embedded in web pages to be executed by an Internet server.

- ASP.NET is a Microsoft technology.
- ASP stands for Active Server Pages.
- ASP.NET is a program that runs inside IIS.
- IIS (Internet Information Services) is Microsoft's Internet server.
- IIS comes as a free component with Windows servers.
- IIS is also a part of Windows 2000 and XP Professional.

ASP.NET includes:

- A page and controls framework
- The ASP.NET compiler
- Security infrastructure
- State-management facilities
- Application configuration
- Health monitoring and performance features
- Debugging support
- An XML Web services framework
- An extensible hosting environment and application life-cycle management
- An extensible designer environment

Page and Controls Framework:


The ASP.NET page and controls framework is a programming framework that runs on a Web server to dynamically produce and render ASP.NET Web pages. ASP.NET Web pages can be requested from any browser or client device, and ASP.NET renders markup (such as HTML) to the requesting browser. As a rule, you can use the same page for multiple browsers, because ASP.NET renders the appropriate markup for the browser making the request. However, you can design your ASP.NET Web page to target a specific browser, such as Microsoft Internet Explorer 6, and take advantage of the features of that browser. ASP.NET supports mobile controls for Web-enabled devices such as cellular phones, handheld computers, and personal digital assistants (PDAs).

ASP.NET Web pages are completely object-oriented. Within ASP.NET Web pages you can work with HTML elements using properties, methods, and events. The ASP.NET page framework removes the implementation details of the separation of client and server inherent in Web-based applications by presenting a unified model for responding to client events in code that runs at the server. The framework also automatically maintains the state of a page and the controls on that page during the page processing life cycle. Controls are written once, can be used in many pages, and are integrated into the ASP.NET Web page that they are placed in during rendering.


The ASP.NET page and controls framework also provides features to control the overall look and feel of your Web site via themes and skins. You can define themes and skins and then apply them at a page level or at a control level.

In addition to themes, you can define master pages that you use to create a consistent layout for the pages in your application. A single master page defines the layout and standard behavior that you want for all the pages (or a group of pages) in your application. You can then create individual content pages that contain the page-specific content you want to display. When users request the content pages, they merge with the master page to produce output that combines the layout of the master page with the content from the content page.

Security Infrastructure:
In addition to the security features of .NET, ASP.NET provides an advanced security infrastructure for authenticating and authorizing user access as well as performing other security-related tasks. You can authenticate users using Windows authentication supplied by IIS, or you can manage authentication using your own user database with ASP.NET forms authentication and ASP.NET membership. Additionally, you can manage authorization to the capabilities and information of your Web application using Windows groups or your own custom role database using ASP.NET roles. You can easily remove, add to, or replace these schemes depending upon the needs of your application. ASP.NET always runs with a particular Windows identity, so you can secure your application using Windows capabilities such as NTFS Access Control Lists (ACLs), database permissions, and so on.

State-Management Facilities:
ASP.NET provides intrinsic state management functionality that enables you to store information between page requests, such as customer information or the contents of a shopping cart. You can save and manage application-specific, session-specific, page-specific, user-specific, and developer-defined information. This information can be independent of any controls on the page. ASP.NET offers distributed state facilities, which enable you to manage state information across multiple instances of the same application on one computer or on several computers.
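A minimal sketch of session and application state in page code is shown below; the class name and state keys are illustrative, not part of the project:

```csharp
using System;
using System.Web.UI;

// Hypothetical ASP.NET Web Forms code-behind illustrating state management.
public partial class CartPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Session state persists per user across page requests.
        if (Session["CartCount"] == null)
            Session["CartCount"] = 0;              // first request in this session

        int count = (int)Session["CartCount"];
        Session["CartCount"] = count + 1;          // survives until the session ends

        // Application state is shared by every user of the application,
        // so updates are bracketed with Lock/UnLock.
        Application.Lock();
        Application["TotalVisits"] = ((int?)Application["TotalVisits"] ?? 0) + 1;
        Application.UnLock();
    }
}
```

Because the session values are independent of any controls on the page, the same data survives navigation between pages within the user's session.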


ASP.NET Configuration:
ASP.NET applications use a configuration system that enables you to define configuration settings for your Web server, for a Web site, or for individual applications. You can make configuration settings at the time your ASP.NET applications are deployed and can add or revise configuration settings at any time with minimal impact on operational Web applications and servers. ASP.NET configuration settings are stored in XML-based files.
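A minimal sketch of such an XML configuration file follows; the connection-string name and values are placeholders, not the project's actual settings:

```xml
<?xml version="1.0"?>
<configuration>
  <connectionStrings>
    <!-- Placeholder connection string; adjust for the actual server and database. -->
    <add name="TutoringDb"
         connectionString="Data Source=localhost;Initial Catalog=SqlLtm;Integrated Security=True"
         providerName="System.Data.SqlClient"/>
  </connectionStrings>
  <system.web>
    <!-- These settings can be revised after deployment without recompiling. -->
    <compilation debug="false"/>
    <sessionState mode="InProc" timeout="20"/>
  </system.web>
</configuration>
```

Because the file is plain XML, settings such as the session timeout can be changed on a running server with minimal impact on the application.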

Health Monitoring and Performance Features:


ASP.NET includes features that enable you to monitor health and performance of your ASP.NET application. ASP.NET health monitoring enables reporting of key events that provide information about the health of an application and about error conditions. These events show a combination of diagnostics and monitoring characteristics and offer a high degree of flexibility in terms of what is logged and how it is logged. ASP.NET supports two groups of performance counters accessible to your applications:

- The ASP.NET system performance counter group
- The ASP.NET application performance counter group

Debugging Support:

ASP.NET takes advantage of the run-time debugging infrastructure to provide cross-language and cross-computer debugging support. You can debug both managed and unmanaged objects, as well as all languages supported by the common language runtime and script languages. In addition, the ASP.NET page framework provides a trace mode that enables you to insert instrumentation messages into your ASP.NET Web pages.
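As a sketch, trace messages can be written from a page's code-behind like this; the category and message text are illustrative:

```csharp
using System;
using System.Web.UI;

// Hypothetical code-behind; requires Trace="true" in the page's @ Page directive
// for the messages to appear in the page's trace output.
public partial class DiagnosticsPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Trace.Write("Lifecycle", "Page_Load entered");

        if (!IsPostBack)
            Trace.Write("Lifecycle", "First request (not a postback)");

        // Warn entries are highlighted in the trace log.
        Trace.Warn("Data", "Reference query returned no rows");
    }
}
```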

XML Web Services Framework


ASP.NET supports XML Web services. An XML Web service is a component containing business functionality that enables applications to exchange information across firewalls using standards like HTTP and XML messaging. XML Web services are not tied to a particular component technology or object-calling convention. As a result, programs written in any language, using any component model, and running on any operating system can access XML Web services.

Extensible Hosting Environment and Application Life-Cycle Management:

ASP.NET includes an extensible hosting environment that controls the life cycle of an application from when a user first accesses a resource (such as a page) in the application to the point at which the application is shut down. While ASP.NET relies on a Web server (IIS) as an application host, ASP.NET provides much of the hosting functionality itself. The architecture of ASP.NET enables you to respond to application events and create custom HTTP handlers and HTTP modules.

Extensible Designer Environment:

ASP.NET includes enhanced support for creating designers for Web server controls for use with a visual design tool such as Visual Studio. Designers enable you to build a design-time user interface for a control, so that developers can configure your control's properties and content in the visual design tool.

Features:

ASP technology was good at displaying text and pictures, but people wanted the following features:

- A reactive site, i.e. one that receives information from the user
- The ability to update user information
- The facility to attach a database to the web site
- Personalization of the web site
- An improved look and feel for the web site

In 2003 Microsoft developed ASP.NET 2.0 and, after extensive testing, released it in 2005 along with Visual Studio 2005.

5.3 ADO.NET

5.3.1 INTRODUCTION TO ADO.NET


ADO.NET is the latest implementation of Microsoft's universal data access strategy. In the past few years we have gone through many changes to classic ADO as Microsoft made changes, bug fixes, and enhancements to the venerable libraries. These libraries have formed the foundation for many Web sites and applications that are in place today. ADO.NET will be no different in this respect, as Microsoft is positioning ADO.NET to be the primary data access technology for the .NET Framework. This will ensure that the data access architecture is mature and robust, since all the Common Language Runtime (CLR) languages will be using these namespaces as their primary means of communicating with data providers.

5.3.2 UNDERSTANDING ADO.NET

ADO.NET has taken XML to heart, with rich support for XML data both as a data consumer and as a data provider. Later versions of classic ADO had some support for XML, but the format was difficult to use unless you were exchanging it with another ADO client. The XML documents that ADO.NET creates are consistent with the XML specification and are well-formed documents, making them suitable for consumption by any data access technology that understands XML. We can take a plain XML document with just a root node, open it in ADO.NET, add data to it, and save it back out.
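This XML round trip can be sketched as follows; the file name and the assumption that the document already defines at least one table of data are illustrative:

```csharp
using System.Data;

// Sketch: load an XML document into a DataSet, add a row, save it back.
class XmlRoundTrip
{
    static void Main()
    {
        // ReadXml infers tables and columns from the XML structure.
        DataSet ds = new DataSet();
        ds.ReadXml("students.xml");

        // Add a row to the first inferred table.
        DataRow row = ds.Tables[0].NewRow();
        row[0] = "new value";
        ds.Tables[0].Rows.Add(row);

        // Persist the data (and optionally its schema) back out as XML.
        ds.WriteXml("students-updated.xml", XmlWriteMode.WriteSchema);
    }
}
```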

ADO.NET has a couple of new ways to serve data, which have made the Recordset obsolete: the DataSet and the DataReader.

The DataSet is an in-memory relational database. A DataSet holds several collections, namely the DataTables collection, the DataViews collection, and the DataRelations collection. A programmer will create one or more DataTable objects in a DataSet and fill them with data. A DataTable contains a collection of DataRows, each of which contains a collection of DataColumns. We can optionally create DataViews based on these DataTables, and even define relations to enforce data integrity.


The process of filling a DataTable with data is simple and provides us with a copy of the data from the data source. The DataSet does not maintain a connection to the data source. With this copy of the data, the application can enable the user to add, edit, and remove data, and can then enable the user to save this data back to the original data source. As a matter of fact, this data can be saved to any other data source, persisted to disk, and/or transferred just as if it were any other file. The key to this functionality is the reliance upon XML and the disconnected nature of ADO.NET.

The DataSet requires a DataAdapter to actually interact with a data source. The DataAdapter represents the connection to a data source and the commands used to communicate with the data source to fill a DataSet or update a data source. After we are finished adding or updating data in the DataSet, the application calls the Update method of the DataAdapter to INSERT, UPDATE, and DELETE records as appropriate at the data source. Note that we don't have to commit our changes back to the original source; that is, we can transfer data to another data source as long as we have a DataAdapter that understands how to communicate between the DataSet and the final data source. This really serves to emphasize the completely disconnected nature of ADO.NET.
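This fill-edit-update cycle can be sketched as follows; the connection string, table, and column names are placeholders, not the project's actual schema:

```csharp
using System.Data;
using System.Data.SqlClient;

// Sketch of the disconnected ADO.NET pattern with a DataAdapter.
class DisconnectedUpdate
{
    static void Main()
    {
        // Placeholder connection string; adjust for the actual database.
        string connStr =
            "Data Source=localhost;Initial Catalog=SqlLtm;Integrated Security=True";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            // The adapter pairs the SELECT command with the connection.
            SqlDataAdapter adapter =
                new SqlDataAdapter("SELECT id, username FROM Students", conn);

            // A command builder derives INSERT/UPDATE/DELETE from the SELECT.
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

            // Fill copies the rows into the DataSet and closes the connection:
            // from here on we work on a disconnected, in-memory copy.
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Students");

            // Edit the in-memory copy.
            ds.Tables["Students"].Rows[0]["username"] = "renamed";

            // Update reconnects and pushes the pending changes back.
            adapter.Update(ds, "Students");
        }
    }
}
```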

5.4 SQL Server:


Microsoft SQL Server management comprises a wide variety of administration tasks, including:

- Registering servers and assigning passwords.
- Reconfiguring network connectivity.
- Configuring standby servers.
- Setting server configuration options.
- Managing SQL Server messages, and so on.

In most cases, you do not need to reconfigure the server. The default settings for the server components, configured during SQL Server Setup, allow you to run SQL Server immediately after it is installed. However, server management is necessary in those situations where you want to add new servers, set up special server configurations, change the network connections, or set server configuration options to improve SQL Server performance.

5.4.1 CREATING A DATABASE:

To create a database, determine the name of the database, its owner (the user who creates the database), its size, and the files and filegroups used to store it. Before creating a database, consider that:

- Permission to create a database defaults to members of the sysadmin and dbcreator fixed server roles, although permissions can be granted to other users.
- The user who creates the database becomes the owner of the database.
- A maximum of 32,767 databases can be created on a server.
- The name of the database must follow the rules for identifiers.

Three types of files are used to store a database:

Primary files: These files contain the startup information for the database. The primary files are also used to store data. Every database has one primary file.

Secondary files: These files hold all the data that does not fit in the primary data file. Databases do not need secondary data files if the primary file is large enough to hold all the data in the database. Some databases may be large enough to need multiple secondary data files, or they may use secondary files on separate disk drives to spread the data across multiple disks.

Transaction log: These files hold the log information used to recover the database. There must be at least one transaction log file for each database, although there may be more than one. The minimum size for a log file is 512 kilobytes (KB).


When a database is created, all the files that comprise the database are filled with zeros to overwrite any existing data left on the disk by previously deleted files. Although this means that the files take longer to create, this action prevents the operating system from having to fill the files with zeros when data is written to the files for the first time during usual database operations. This improves the performance of day-to-day operations.
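As a sketch, a database with one primary data file and one transaction log file could be defined in T-SQL as follows; the database name, file paths, and sizes are placeholders:

```sql
CREATE DATABASE SqlLtm
ON PRIMARY
(
    NAME = SqlLtm_data,                  -- logical name of the primary data file
    FILENAME = 'C:\Data\SqlLtm.mdf',     -- placeholder path
    SIZE = 10MB,
    FILEGROWTH = 5MB
)
LOG ON
(
    NAME = SqlLtm_log,                   -- transaction log file
    FILENAME = 'C:\Data\SqlLtm.ldf',
    SIZE = 1MB,                          -- a log file must be at least 512 KB
    FILEGROWTH = 10%
);
```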

CREATE A DATABASE USING THE CREATE DATABASE WIZARD (Enterprise Manager):

To create a database using the Create Database Wizard:

1. Expand a server group, and then expand the server in which to create a database.
2. On the Tools menu, click Wizards.
3. Expand Database.
4. Double-click Create Database Wizard.
5. Complete the steps in the wizard.

CREATING AND MODIFYING A TABLE:

After you have designed the database, the tables that will store the data in the database can be created. The data is usually stored in permanent tables. Tables are stored in the database files until they are deleted and are available to any user who has the appropriate permissions.

TEMPORARY TABLES:

You can also create temporary tables. Temporary tables are similar to permanent tables, except that temporary tables are stored in tempdb and are deleted automatically when no longer in use. The two types of temporary tables, local and global, differ from each other in their names, their visibility, and their availability. Local temporary tables have a single number sign (#) as the first character of their names; they are visible only to the current connection for the user; and they are deleted when the user disconnects from the instance of Microsoft SQL Server 2000. Global temporary tables have two number signs (##) as the first characters of their names; they are visible to any user after they are created; and they are deleted when all users referencing the table disconnect from SQL Server.

TABLE PROPERTIES:

You can define up to 1,024 columns per table. Table and column names must follow the rules for identifiers; they must be unique within a given table, but you can use the same column name in different tables in the same database. You must also define a data type for each column. Although table names must be unique for each owner within a database, you can create multiple tables with the same name if you specify different owners for each. For example, you can create two tables named employees and designate Jonah as the owner of one and Sally as the owner of the other. When you need to work with one of the employees tables, you can distinguish between the two by specifying the owner with the name of the table.

Before using a component, it has to be attached to the application, which can be done by double-clicking the solution name in the Solution Explorer, browsing to the component, and attaching it to the solution. Once the component is attached, it can be imported into and used by the application.
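The temporary-table naming and visibility rules described in this section can be sketched in T-SQL as follows; the table and column names are illustrative:

```sql
-- Local temporary table: visible only to the current connection.
CREATE TABLE #Scores (student_id INT, score INT);

-- Global temporary table: visible to every connection until the
-- last session referencing it disconnects.
CREATE TABLE ##SharedScores (student_id INT, score INT);

INSERT INTO #Scores VALUES (1, 85);
SELECT * FROM #Scores;   -- succeeds here; a different connection cannot see #Scores

DROP TABLE #Scores;      -- otherwise dropped automatically on disconnect
```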

5.5 SQL Server 2008:

The next version of SQL Server, SQL Server 2008, was released (RTM) on August 6, 2008[11] and aims to make data management self-tuning, self-organizing, and self-maintaining with the development of SQL Server Always On technologies, to provide near-zero downtime. SQL Server 2008 also includes support for structured and semi-structured data, including digital media formats for pictures, audio, video and other multimedia data. In earlier versions, such multimedia data could be stored as BLOBs (binary large objects), but these are generic bitstreams; intrinsic awareness of multimedia data allows specialized functions to be performed on them. According to Paul Flessner, Senior Vice President, Server Applications, Microsoft Corp., SQL Server 2008 can be a data storage back end for different varieties of data (XML, email, time/calendar, file, document, spatial, and so on) as well as perform search, query, analysis, sharing, and synchronization across all data types.

The Full-Text Search functionality has been integrated with the database engine. According to a Microsoft technical article, this simplifies management and improves performance.

Spatial data is stored in two types. A "flat Earth" (GEOMETRY, or planar) data type represents geospatial data which has been projected from its native, spherical coordinate system into a plane. A "round Earth" data type (GEOGRAPHY) uses an ellipsoidal model in which the Earth is defined as a single continuous entity which does not suffer from singularities such as the international dateline, poles, or map projection zone "edges". Approximately 70 methods are available to represent spatial operations for the Open Geospatial Consortium Simple Features for SQL, Version 1.1.

SQL Server 2008 includes better compression features, which also help to improve scalability. It enhanced the indexing algorithms and introduced the notion of filtered indexes. It also includes the Resource Governor, which allows reserving resources for certain users or workflows, as well as capabilities for transparent data encryption (TDE) and compression of backups.

SQL Server 2008 supports the ADO.NET Entity Framework, and the reporting tools, replication, and data definition are built around the Entity Data Model. SQL Server Reporting Services gains charting capabilities from the integration of the data visualization products from Dundas Data Visualization, Inc., which was acquired by Microsoft. On the management side, SQL Server 2008 includes the Declarative Management Framework, which allows configuring policies and constraints declaratively, on the entire database or on certain tables. The version of SQL Server Management Studio included with SQL Server 2008 supports IntelliSense for SQL queries against a SQL Server 2008 Database Engine.

SQL Server 2008 also makes the databases available via Windows PowerShell providers, and management functionality is available as cmdlets, so that the server and all the running instances can be managed from Windows PowerShell.


5.6 SQL Server 2008 R2:

SQL Server 2008 R2 (formerly codenamed SQL Server "Kilimanjaro") was announced at TechEd 2009 and was released to manufacturing on April 21, 2010. SQL Server 2008 R2 adds certain features to SQL Server 2008, including a master data management system branded as Master Data Services (a central management of master data entities and hierarchies) and Multi Server Management, a centralized console to manage multiple SQL Server 2008 instances and services including relational databases, Reporting Services, Analysis Services and Integration Services. SQL Server 2008 R2 also includes a number of new services, including PowerPivot for Excel and SharePoint, Master Data Services, StreamInsight, Report Builder 3.0, the Reporting Services Add-in for SharePoint, a Data-tier function in Visual Studio that enables packaging of tiered databases as part of an application, and a SQL Server Utility named UC (Utility Control Point), part of AMSM (Application and Multi-Server Management), that is used to manage multiple SQL Servers.

5.7 LINQ:

Language-Integrated Query (LINQ) adds query capabilities to Visual Basic and provides simple and powerful capabilities when you work with all kinds of data. Rather than sending a query to a database to be processed, or working with different query syntax for each type of data that you are searching, LINQ introduces queries as part of the Visual Basic language. It uses a unified syntax regardless of the type of data. LINQ enables you to query data from a SQL Server database, XML, in-memory arrays and collections, ADO.NET datasets, or any other remote or local data source that supports LINQ. You can do all this with common Visual Basic language elements.

Because your queries are written in the Visual Basic language, your query results are returned as strongly typed objects. These objects support IntelliSense, which enables you to write code faster and catch errors in your queries at compile time instead of at run time. LINQ queries can be used as the source of additional queries to refine results. They can also be bound to controls so that users can easily view and modify your query results.

For example, the following code example shows a LINQ query that returns a list of customers from a collection and groups them based on their location.

Dim customers As List(Of Customer) = GetCustomerList()

Dim customersByCountry = From cust In customers
                         Order By cust.Country, cust.City
                         Group By CountryName = cust.Country
                         Into RegionalCustomers = Group, Count()
                         Order By CountryName

For Each country In customersByCountry
    Console.WriteLine(country.CountryName & " (" & country.Count & ")" & vbCrLf)
    For Each customer In country.RegionalCustomers
        Console.WriteLine(vbTab & customer.CompanyName & " (" & customer.City & ")")
    Next
Next

A LINQ provider maps your Visual Basic LINQ queries to the data source being queried. When you write a LINQ query, the provider takes that query and translates it into commands that the data source will be able to execute. The provider also converts data from the source to the objects that make up your query result. Finally, it converts objects to data when you send updates to the data source.
The Structure of a LINQ Query

A LINQ query, often referred to as a query expression, consists of a combination of query clauses that identify the data sources and iteration variables for the query. A query expression can also include instructions for sorting, filtering, grouping, and joining, or calculations to apply to the source data. Query expression syntax resembles the syntax of SQL; therefore, you may find much of the syntax familiar.

A query expression starts with a From clause. This clause identifies the source data for a query and the variables that are used to refer to each element of the source data individually. These variables are named range variables or iteration variables. The From clause is required for a query, except for Aggregate queries, where the From clause is optional. After the scope and source of the query are identified in the From or Aggregate clauses, you can include any combination of query clauses to refine the query. For details about query clauses, see Visual Basic LINQ Query Operators later in this topic. For example, the following query identifies a source collection of customer data as the customers variable, and an iteration variable named cust.
Dim queryResults = From cust In customers
                   Select cust.CompanyName

This example is a valid query by itself; however, the query becomes far more powerful when you add more query clauses to refine the result. For example, you can add a Where clause to filter the result by one or more values. Query expressions are a single line of code; you can just append additional query clauses to the end of the query. You can break up a query across multiple lines of text to improve readability by using the underscore (_) line-continuation character. The following code example shows an example of a query that includes a Where clause.
Dim queryResults = From cust In customers
                   Where cust.Country = "USA"

Another powerful query clause is the Select clause, which enables you to return only selected fields from the data source. LINQ queries return enumerable collections of strongly typed objects. A query can return a collection of anonymous types or named types. You can use the Select clause to return only a single field from the data source. When you do this, the type of the collection returned is the type of that single field. You can also use the Select clause to return multiple fields from the data source. When you do this, the type of the collection returned is a new anonymous type. You can also match the fields returned by the query to the fields of a specified named type. The following code example shows a query expression that returns a collection of anonymous types that have members populated with data from the selected fields from the data source.
Dim queryResults = From cust In customers
                   Where cust.Country = "USA"
                   Select cust.CompanyName, cust.Country

LINQ queries can also be used to combine multiple sources of data and return a single result. This can be done with one or more From clauses, or by using the Join or Group Join query clauses. The following code example shows a query expression that combines customer and order data and returns a collection of anonymous types containing customer and order data.
Dim queryResults = From cust In customers, ord In orders
                   Where cust.CustomerID = ord.CustomerID
                   Select cust, ord

You can use the Group Join clause to create a hierarchical query result that contains a collection of customer objects. Each customer object has a property that contains a collection of all orders for that customer. The following code example shows a query expression that combines customer and order data as a hierarchical result and returns a collection of anonymous types. The query returns a type that includes a CustomerOrders property that contains a collection of order data for the customer. It also includes an OrderTotal property that contains the sum of the totals for all the orders for that customer. (This query is equivalent to a LEFT OUTER JOIN.)
Dim queryResults = From cust In customers
                   Group Join ord In orders
                       On cust.CustomerID Equals ord.CustomerID
                   Into CustomerOrders = Group,
                        OrderTotal = Sum(ord.Total)
                   Select cust.CompanyName, cust.CustomerID,
                          CustomerOrders, OrderTotal

There are several additional LINQ query operators that you can use to create powerful query expressions. The next section of this topic discusses the various query clauses that you can include in a query expression. For details about Visual Basic query clauses, see Queries (Visual Basic).
Visual Basic LINQ Query Operators

The classes in the System.Linq namespace and the other namespaces that support LINQ queries include methods that you can call to create and refine queries based on the needs of your application. Visual Basic includes keywords for the most common query clauses, as described below.

From Clause (Visual Basic)
Either a From clause or an Aggregate clause is required to begin a query. A From clause specifies a source collection and an iteration variable for a query. For example:

' Returns the company name for all customers for whom
' State is equal to "WA".
Dim names = From cust In customers
            Where cust.State = "WA"
            Select cust.CompanyName

Select Clause (Visual Basic)
Optional. Declares a set of iteration variables for a query. If a Select clause is not specified, the iteration variables for the query consist of the iteration variables specified by the From or Aggregate clause. For example:

' Returns the company name and ID value for each
' customer as a collection of a new anonymous type.
Dim customerList = From cust In customers
                   Select cust.CompanyName, cust.CustomerID

Where Clause (Visual Basic)
Optional. Specifies a filtering condition for a query. For example:

' Returns all product names for which the Category of
' the product is "Beverages".
Dim names = From product In products
            Where product.Category = "Beverages"
            Select product.Name

Order By Clause (Visual Basic)
Optional. Specifies the sort order for columns in a query. For example:

' Returns a list of books sorted by price in
' ascending order.
Dim titlesAscendingPrice = From b In books
                           Order By b.price

Join Clause (Visual Basic)
Optional. Combines two collections into a single collection. For example:

' Returns a combined collection of all of the
' processes currently running and a descriptive
' name for the process taken from a list of
' descriptive names.
Dim processes = From proc In Process.GetProcesses
                Join desc In processDescriptions
                    On proc.ProcessName Equals desc.ProcessName
                Select proc.ProcessName, proc.Id, desc.Description

Group By Clause (Visual Basic)
Optional. Groups the elements of a query result. Can be used to apply aggregate functions to each group. For example:

' Returns a list of orders grouped by the order date
' and sorted in ascending order by the order date.
Dim orderList = From order In orders
                Order By order.OrderDate
                Group By OrderDate = order.OrderDate
                Into OrdersByDate = Group

Group Join Clause (Visual Basic)
Optional. Combines two collections into a single hierarchical collection. For example:

' Returns a combined collection of customers and
' customer orders.
Dim customerList = From cust In customers
                   Group Join ord In orders
                       On cust.CustomerID Equals ord.CustomerID
                   Into CustomerOrders = Group,
                        TotalOfOrders = Sum(ord.Total)
                   Select cust.CompanyName, cust.CustomerID,
                          CustomerOrders, TotalOfOrders

Aggregate Clause (Visual Basic)
Either a From clause or an Aggregate clause is required to begin a query. An Aggregate clause applies one or more aggregate functions to a collection. For example, you can use the Aggregate clause to calculate a sum for all the elements returned by a query.

' Returns the sum of all order totals.
Dim orderTotal = Aggregate order In orders
                 Into Sum(order.Total)

You can also use the Aggregate clause to modify a query. For example, you can use the Aggregate clause to perform a calculation on a related query collection.

' Returns the customer company name and largest
' order total for each customer.
Dim customerMax = From cust In customers
                  Aggregate order In cust.Orders
                  Into MaxOrder = Max(order.Total)
                  Select cust.CompanyName, MaxOrder

Let Clause (Visual Basic)
Optional. Computes a value and assigns it to a new variable in the query. For example:

' Returns a list of products with a calculation of
' a ten percent discount.
Dim discountedProducts = From prod In products
                         Let Discount = prod.UnitPrice * 0.1
                         Where Discount >= 50
                         Select prod.Name, prod.UnitPrice, Discount

Distinct Clause (Visual Basic)
Optional. Restricts the values of the current iteration variable to eliminate duplicate values in query results. For example:

' Returns a list of cities with no duplicate entries.
Dim cities = From item In customers
             Select item.City
             Distinct

Skip Clause (Visual Basic)
Optional. Bypasses a specified number of elements in a collection and then returns the remaining elements. For example:

' Returns a list of customers. The first 10 customers
' are ignored and the remaining customers are
' returned.
Dim customerList = From cust In customers
                   Skip 10

Skip While Clause (Visual Basic)
Optional. Bypasses elements in a collection as long as a specified condition is true and then returns the remaining elements. For example:

' Returns a list of customers. The query ignores all
' customers until the first customer for whom
' IsSubscriber returns false. That customer and all
' remaining customers are returned.
Dim customerList = From cust In customers
                   Skip While IsSubscriber(cust)

Take Clause (Visual Basic)
Optional. Returns a specified number of contiguous elements from the start of a collection. For example:

' Returns the first 10 customers.
Dim customerList = From cust In customers
                   Take 10

Take While Clause (Visual Basic)
Optional. Includes elements in a collection as long as a specified condition is true and bypasses the remaining elements. For example:

' Returns a list of customers. The query returns
' customers until the first customer for whom
' HasOrders returns false. That customer and all
' remaining customers are ignored.
Dim customersWithOrders = From cust In customers
                          Order By cust.Orders.Count Descending
                          Take While HasOrders(cust)

For details about Visual Basic query clauses, see Queries (Visual Basic). You can use additional LINQ query features by calling members of the enumerable and queryable types provided by LINQ. You can use these additional capabilities by calling a particular query operator on the result of a query expression. For example, the following code example uses the Union method to combine the results of two queries into one query result. It uses the ToList(Of TSource) method to return the query result as a generic list.

Public Function GetAllCustomers() As List(Of Customer)
    Dim customers1 = From cust In domesticCustomers
    Dim customers2 = From cust In internationalCustomers

    Dim customerList = customers1.Union(customers2)

    Return customerList.ToList()
End Function

For details about additional LINQ capabilities, see Standard Query Operators Overview.
Connecting to a Database by Using LINQ to SQL

In Visual Basic, you identify the SQL Server database objects, such as tables, views, and stored procedures, that you want to access by using a LINQ to SQL file. A LINQ to SQL file has an extension of .dbml. When you have a valid connection to a SQL Server database, you can add a LINQ to SQL Classes item template to your project. This will display the Object Relational Designer (O/R Designer). The O/R Designer enables you to drag the items that you want to access in your code from the Server Explorer/Database Explorer onto the designer surface. The LINQ to SQL file adds a DataContext object to your project. For examples with step-by-step instructions, see How to: Query a Database by Using LINQ (Visual Basic) and How to: Call a Stored Procedure by Using LINQ (Visual Basic).
Visual Basic Features That Support LINQ


Visual Basic includes other notable features that make the use of LINQ simple and reduce the amount of code that you must write to perform LINQ queries. These include the following:

- Anonymous types, which enable you to create a new type based on a query result.
- Implicitly typed variables, which enable you to defer specifying a type and let the compiler infer the type based on the query result.
- Extension methods, which enable you to extend an existing type with your own methods without modifying the type itself.

For details, see Visual Basic Features That Support LINQ.


Deferred and Immediate Query Execution

Query execution is separate from creating a query. After a query is created, its execution is triggered by a separate mechanism. A query can be executed as soon as it is defined (immediate execution), or the definition can be stored and the query can be executed later (deferred execution). By default, when you create a query, the query itself does not execute immediately. Instead, the query definition is stored in the variable that is used to reference the query result. When the query result variable is accessed later in code, such as in a For...Next loop, the query is executed. This process is referred to as deferred execution. Using the ToList or ToArray methods will force immediate execution. This can be useful when you want to execute the query immediately and cache the results. For more information about these methods, see Converting Data Types. For more information about query execution, see Writing Your First LINQ Query (Visual Basic).
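The same deferred-versus-immediate distinction can be sketched outside Visual Basic. The following Python analogue (an illustration, not code from this project) uses a generator expression for deferred execution and a list for immediate, cached execution:

```python
# Deferred execution: defining the "query" runs nothing yet.
numbers = [1, 2, 3]
query = (n * 10 for n in numbers)      # stored definition, like a LINQ query variable

numbers.append(4)                      # the source changes before execution

# Execution happens only when the result is finally iterated,
# so the late change to the source is visible.
deferred_result = list(query)          # [10, 20, 30, 40]

# Immediate execution: the result is computed and cached up front,
# the way ToList/ToArray force a LINQ query to run immediately.
numbers2 = [1, 2, 3]
snapshot = [n * 10 for n in numbers2]
numbers2.append(4)                     # too late: the snapshot is fixed
immediate_result = snapshot            # [10, 20, 30]
```

The deferred query reflects changes made to the source after the query was defined; the immediate snapshot does not.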
XML in Visual Basic

The XML features in Visual Basic include XML literals and XML axis properties, which enable you easily to create, access, query, and modify XML in your code. XML literals enable you to write XML directly in your code. The Visual Basic compiler treats the XML as a first-class data object. The following code example shows how to create an XML element, access its sub-elements and attributes, and query the contents of the element by using LINQ.
' Place Imports statements at the top of your program.
Imports <xmlns:ns="http://SomeNamespace">

Module Sample1

    Sub SampleTransform()
        ' Create test by using a global XML namespace prefix.
        Dim contact =
            <ns:contact>
                <ns:name>Patrick Hines</ns:name>
                <ns:phone ns:type="home">206-555-0144</ns:phone>
                <ns:phone ns:type="work">425-555-0145</ns:phone>
            </ns:contact>

        Dim phoneTypes =
            <phoneTypes>
                <%= From phone In contact.<ns:phone>
                    Select <type><%= phone.@ns:type %></type> %>
            </phoneTypes>

        Console.WriteLine(phoneTypes)
    End Sub

End Module

For more information, see XML in Visual Basic.


Related Resources

- XML in Visual Basic: Describes the XML features in Visual Basic that can be queried and that enable you to include XML as first-class data objects in your Visual Basic code.
- Queries (Visual Basic): Provides reference information about the query clauses that are available in Visual Basic.
- LINQ (Language-Integrated Query): Includes general information, programming guidance, and samples for LINQ.
- LINQ to SQL: Includes general information, programming guidance, and samples for LINQ to SQL.
- LINQ to Objects: Includes general information, programming guidance, and samples for LINQ to Objects.
- LINQ to ADO.NET (Portal Page): Includes links to general information, programming guidance, and samples for LINQ to ADO.NET.
- LINQ to XML: Includes general information, programming guidance, and samples for LINQ to XML.

6. TESTING PROCESS
6.1 Introduction
A primary purpose of testing is to detect software failures so that defects may be uncovered and corrected. This is a non-trivial pursuit. Testing cannot establish that a product functions properly under all conditions; it can only establish that it does not function properly under specific conditions. The scope of software testing often includes examination of the code as well as execution of that code in various environments and conditions, and it examines two aspects of the code: does it do what it is supposed to do, and does it do what it needs to do? In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.

Defects and failures

Not all software defects are caused by coding errors. One common source of expensive defects is requirements gaps, e.g., unrecognized requirements that result in errors of omission by the program designer. A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.

Software faults occur through the following process. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed.

Compatibility

A frequent cause of software failure is incompatibility with another application, a new operating system, or, increasingly, a new web browser version. In the case of lack of backward compatibility, this can occur because the programmers have only considered coding and testing the software on the latest operating system they have access to, or in isolation (no other conflicting applications running at the same time), or under 'ideal' conditions ('unlimited' memory, a 'superfast' processor, the latest operating system incorporating all updates, etc.). In effect, everything runs "as intended" but only when executing at the same time on the same machine with that particular combination of software and/or hardware.

Input combinations and preconditions

A problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product. This means that the number of defects in a software product can be very large, and defects that occur infrequently are difficult to find in testing. More significantly, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do) -- for example, usability, scalability, performance, compatibility, reliability -- can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.

Static vs. dynamic testing

There are many approaches to software testing. Reviews, walkthroughs or inspections are considered static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. The former can be, and unfortunately in practice often is, omitted, whereas the latter takes place when programs begin to be used for the first time, which is normally considered the beginning of the testing stage. Dynamic testing may actually begin before the program is 100% complete, in order to test particular sections of code (modules or discrete functions). For example, spreadsheet programs are, by their very nature, tested to a large extent "on the fly" during the build process, as the result of a calculation or text manipulation is shown interactively immediately after each formula is entered.

Software verification and validation

Software testing is used in association with verification and validation:

- Verification: Have we built the software right (i.e., does it match the specification)? It is process based.
- Validation: Have we built the right software (i.e., is this what the customer wants)? It is product based.

The software testing team

Software testing can be done by software testers. Until the 1950s the term "software tester" was used generally, but later it was also seen as a separate profession. Reflecting the different periods and goals in software testing, several distinct roles have been established: test lead/manager, test designer, tester, test automater/automation developer, and test administrator.

Software Quality Assurance (SQA)

Though controversial, software testing may be viewed as an important part of the software quality assurance (SQA) process. In SQA, software process specialists and auditors take a broader view of software and its development. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered code: the defect rate. What constitutes an acceptable defect rate depends on the nature of the software. An arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than mission-critical software such as that used to control the functions of an airliner. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.

Testing methods

Software testing methods are traditionally divided into black box testing and white box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.

6.2 Black box testing:


Black box testing treats the software as a black box, without any knowledge of internal implementation. Black box testing methods include equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, traceability matrix, exploratory testing and specification-based testing.

Specification-based testing
Specification-based testing aims to test the functionality according to the requirements. Thus, the tester inputs data and only sees the output from the test object. This level of testing usually requires thorough test cases to be provided to the tester who then can simply verify that for a given input, the output value (or behavior), is the same as the expected value specified in the test case. Specification-based testing is necessary but insufficient to guard against certain risks.
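Such expected-value checks can be sketched as follows (a hypothetical Python example; the function under test and its specified grade boundaries are assumptions for illustration, not taken from this project):

```python
def grade(score):
    """Hypothetical function under test: maps a 0-100 score to a letter grade."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    if score >= 70:
        return "A"
    if score >= 50:
        return "B"
    return "C"

# Specification-based test cases: (input, expected output) pairs taken from
# the requirements, deliberately including the boundary values 0, 49/50,
# 69/70 and 100.
test_cases = [
    (0, "C"), (49, "C"),
    (50, "B"), (69, "B"),
    (70, "A"), (100, "A"),
]

for given, expected in test_cases:
    actual = grade(given)
    assert actual == expected, f"grade({given}) = {actual}, expected {expected}"
```

The tester only compares observed outputs against the expected values from the specification; nothing about the implementation is consulted.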

Advantages and disadvantages


The black box tester has no "bonds" with the code, and a tester's perception is very simple: the code MUST have bugs. Using the principle "Ask and you shall receive," black box testers find bugs where programmers don't. But, on the other hand, black box testing is like a walk in a dark labyrinth without a flashlight, because the tester doesn't know how the back end was actually constructed. That is why there are situations when:

1. a black box tester writes many test cases to check something that could be tested by only one test case, and/or
2. some parts of the back end are not tested at all.

Therefore, black box testing has the advantage of an unaffiliated opinion on the one hand and the disadvantage of blind exploring on the other.

6.3 White box testing:


White box testing, by contrast to black box testing, is when the tester has access to the internal data structures and algorithms (and the code that implements them).

Types of white box testing

The following types of white box testing exist:

- API testing - testing of the application using public and private APIs.
- Code coverage - creating tests to satisfy some criteria of code coverage. For example, the test designer can create tests to cause all statements in the program to be executed at least once.
- Fault injection methods.
- Mutation testing methods.
- Static testing - white box testing includes all static testing.

Code completeness evaluation

White box testing methods can also be used to evaluate the completeness of a test suite that was created with black box testing methods. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.

Two common forms of code coverage are:

- Function coverage, which reports on the functions executed, and
- Statement coverage, which reports on the number of lines executed to complete the test.

Both return a coverage metric, measured as a percentage.
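How such a percentage is derived can be shown with a deliberately simplified Python sketch built on the standard sys.settrace hook (real projects would use a dedicated coverage tool; the function under test here is a made-up example):

```python
import sys

def absolute(x):
    """Hypothetical function under test, containing two branches."""
    if x < 0:
        return -x
    return x

executed = set()   # line numbers executed inside absolute()

def tracer(frame, event, arg):
    # Record every "line" event that occurs inside the function under test.
    if event == "line" and frame.f_code.co_name == "absolute":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
absolute(5)        # one test case: exercises only the non-negative branch
sys.settrace(None)

# Statement coverage = executed statements / total statements, as a percentage.
# absolute() has 3 statements; the single test above executes 2 of them.
total_statements = 3
statement_coverage = 100 * len(executed) / total_statements
```

A second test case with a negative input would execute the remaining statement and raise the metric to 100%.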

6.4 Grey Box Testing:


In recent years the term grey box testing has come into common usage. This involves having access to internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box level. Manipulating input data and formatting output do not qualify as grey-box because the input and output are clearly outside of the black-box we are calling the software under test. This is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. Grey box testing may also include reverse engineering to determine, for instance, boundary values or error messages.


Acceptance testing

Acceptance testing can mean one of two things:

1. A smoke test used as an acceptance test prior to introducing a build to the main testing process.
2. Acceptance testing performed by the customer, known as user acceptance testing (UAT).

Regression testing

Regression testing is any type of software testing that seeks to uncover software regressions. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged.

Non-functional software testing

Special methods exist to test non-functional aspects of software:

- Performance testing checks to see if the software can handle large quantities of data or users. This is generally referred to as software scalability. This activity of non-functional software testing is often referred to as load testing.
- Usability testing is needed to check if the user interface is easy to use and understand.
- Security testing is essential for software that processes confidential data, to prevent system intrusion by hackers.
- Internationalization and localization testing is needed to check these aspects of software, for which a pseudo-localization method can be used.

In contrast to functional testing, which establishes the correct operation of the software (correct in that it matches the expected behavior defined in the design requirements), non-functional testing verifies that the software functions properly even when it receives invalid or unexpected inputs. Software fault injection, in the form of fuzzing, is an example of non-functional testing.


Testing process

A common practice is for software testing to be performed by an independent group of testers after the functionality is developed but before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing. Another practice is to start software testing at the same moment the project starts and continue it as a continuous process until the project finishes. In counterpoint, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first by the software engineers (often with pair programming in the extreme programming methodology).

Testing can be done on the following levels:

- Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.
- Integration testing exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.
- System testing tests a completely integrated system to verify that it meets its requirements.
- System integration testing verifies that a system is integrated to any external or third-party systems defined in the system requirements.

Before shipping the final version of software, alpha and beta testing are often done additionally.

Regression testing

After modifying software, either for a change in functionality or to fix defects, a regression test re-runs previously passing tests on the modified software to ensure that the modifications haven't unintentionally caused a regression of previous functionality. Regression testing can be performed at any or all of the above test levels. These regression tests are often automated. More specific forms of regression testing are known as sanity testing, when quickly checking for bizarre behavior, and smoke testing, when testing for basic functionality. Benchmarks may be employed during regression testing to ensure that the performance of the newly modified software will be at least as acceptable as the earlier version or, in the case of code optimization, that some real improvement has been achieved.

Testing tools

Program testing and fault detection can be aided significantly by testing tools and debuggers. Types of testing/debug tools include features such as:

- Program monitors, permitting full or partial monitoring of program code, including:
  o Instruction set simulators, permitting complete instruction-level monitoring and trace facilities
  o Program animation, permitting step-by-step execution and conditional breakpoints at source level or in machine code
  o Code coverage reports
- Formatted dump or symbolic debugging, tools allowing inspection of program variables on error or at chosen points
- Benchmarks, allowing run-time performance comparisons to be made

Measuring software testing

Usually, quality is constrained to such topics as correctness, completeness, and security, but can also include more technical requirements as described under the ISO 9126 standard, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. There are a number of common software measures, often called "metrics", which are used to measure the state of the software or the adequacy of the testing.

Testing artifacts


The software testing process can produce several artifacts.

Test case

A test case in software engineering normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result. This can be as pragmatic as "for condition x your derived result is y", whereas other test cases describe the input scenario and the expected results in more detail.

Test script

The test script is the combination of a test case, test procedure, and test data. Initially the term was derived from the product of work created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both.

Test data

The most common tests, whether manual or automated, are retests and regression tests. In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project.
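Keeping the data apart from the test logic can be sketched as follows (hypothetical Python; in practice the value sets would live in an external file shared with the client, and the values here are made up for illustration):

```python
import json

# Test data kept separate from the test logic, as it would be in a data file.
test_data = json.loads("""
[
  {"username": "alice", "password": "secret1", "expect": true},
  {"username": "",      "password": "secret1", "expect": false},
  {"username": "alice", "password": "",        "expect": false}
]
""")

def login_is_valid(username, password):
    # Stand-in for the feature under test.
    return bool(username) and bool(password)

# The same functionality is exercised once per set of values.
results = [login_is_valid(row["username"], row["password"]) == row["expect"]
           for row in test_data]
```

New value sets can then be added to the data file without touching the test code.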

Test suite

The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
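As a concrete illustration (hypothetical Python, using the standard unittest module rather than anything from this project), a suite is an explicit collection of test cases that is run as one unit:

```python
import unittest

def apply_discount(price, percent):
    """Stand-in for the feature under test."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    # A full suite document would also record the system configuration
    # used during testing (OS, runtime version, database, ...).
    def test_ten_percent(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_percent(self):
        self.assertEqual(apply_discount(55.5, 0), 55.5)

# The suite groups the individual test cases and runs them together.
suite = unittest.TestSuite()
suite.addTest(DiscountTests("test_ten_percent"))
suite.addTest(DiscountTests("test_zero_percent"))

result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The runner's result object reports how many cases ran and whether all of them passed.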


Test plan

A test specification is called a test plan. The developers are well aware of what test plans will be executed, and this information is made available to them. This makes the developers more cautious when developing their code and ensures that the developers' code is not passed through any surprise test case or test plan.

Test harness

The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.

Software testing certification types

Certifications can be grouped into two types: exam-based and education-based.

- Exam-based certifications: These require passing an exam, for which one can also prepare by self-study, e.g., ISTQB or QAI.
- Education-based certifications: These are instructor-led sessions, where each course has to be passed, e.g., IIST (International Institute for Software Testing).

Testing certifications
CATE offered by the International Institute for Software Testing CBTS offered by the Brazilian Certification of Software Testing (ALATS) Certified Software Tester (CSTE) offered by the Quality Assurance Institute (QAI) Certified Software Test Professional (CSTP) offered by the International Institute

Software Testing

79

CSTP (TM) (Australian Version) offered by K. J. Ross & Associates ISEB offered by the Information Systems Examinations Board ISTQB Certified Tester, Foundation Level (CTFL) offered by the International Software Testing

Qualification Board
ISTQB Certified Tester, Advanced Level (CTAL) offered by the International Software Testing

Qualification Board
CBTS offered by the Brazilian Certification of Software Testing (ALATS) TMPF Next Foundation offered by the Examination Institute for Information Science

Quality assurance certifications:
CSQE, offered by the American Society for Quality
CSQA, offered by the Quality Assurance Institute
CQIA, offered by the American Society for Quality
CMSQ, offered by the Quality Assurance Institute

Controversy: Some of the major software testing controversies include:

What constitutes responsible software testing? Members of the "context-driven" school of testing believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation.

Agile vs. traditional: Should testers learn to work under conditions of uncertainty and constant change, or should they aim at process "maturity"? The agile testing movement has gained popularity since 2006, mainly in commercial circles, whereas government and military software providers are slower to embrace this methodology and mostly still hold to CMMI.

Exploratory vs. scripted: Should tests be designed at the same time as they are executed, or should they be designed beforehand?

Manual testing vs. automated: Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. Others, such as advocates of agile development, recommend automating 100% of all tests. In particular, test-driven development states that developers should write unit tests of the xUnit type before coding the functionality; the tests can then be considered a way to capture and implement the requirements.

Software design vs. software implementation: Should testing be carried out only at the end or throughout the whole process?

Who watches the watchmen? The idea is that any form of observation is also an interaction, so the act of testing can itself affect that which is being tested.

6.5 System Interfaces:


The various interfaces used in the system are described in the subsequent paragraphs.

6.5.1 User Interfaces: User friendly interfaces as depicted below will be used.


1. Screen formats are required to be created with the following features:
a) User friendly.
b) Indicate the mandatory fields with an asterisk (*).
c) Fill in default values wherever possible.
d) Provide combo boxes in all input screens.
e) User and data-entry personnel details should be stored in the database through the Save button.
f) Authentication of users should be carried out wherever required.

2. Web page or window layouts. Each screen should have:
a) Menu-driven facility
b) Uniformity
c) Consistency

3. Outputs:
a) Analytical outputs should be supported by graphs.
b) For each output there should be a provision for printing.
c) Emails should be well structured.
d) Each paper output should be formatted to A4 size paper.


4. DOs (input):
a) Each input box should be supported by a label.
b) Give tool tips where required.
c) Give formats (mm/dd/yy).
d) Provide a tab index.
e) Input elements without visible labels should contain placeholder text (e.g. in search and login fields).
f) Radio inputs should have one option checked as default.

5. DON'Ts (input):
a) Don't distract users from their goals.
b) Don't use a dark background with dark font colors.
c) Don't use too many colors.

6. Error messages: Give short error messages such as "Incorrect Data", "Incorrect Date Format", or "Field is required", and supplement each with an OK button.
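The guideline above can be sketched as a small validator that returns exactly these short messages for the mm/dd/yy date format named in the DOs; validate_date_field is a hypothetical helper, not part of the actual system:

```python
from datetime import datetime

def validate_date_field(value):
    # Return a short error message, or "" when the input is valid.
    if not value.strip():
        return "Field is required"
    try:
        datetime.strptime(value, "%m/%d/%y")  # the mm/dd/yy format
    except ValueError:
        return "Incorrect Date Format"
    return ""
```

The UI would display the returned message in a small dialog with an OK button.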


7. OUTPUT SCREENS

Login


Student Registration


Professor Registration


Professor Question and Answer


Student selecting the question


Student Answer


8. Conclusion

The goals that are achieved by the software are:
Instant access
Improved productivity
Optimum utilization of resources
Efficient management of records
Simplification of the operations
Less processing time for the required information
User friendliness
Portability and flexibility for further enhancements

Great care has been taken by the development team to meet all the requirements of the client. Each module has undergone stringent testing procedures, and the integration testing activity was performed by the team leaders of each module. After completion, execution, and successful demonstration, we confirm that the product meets the client's requirements and is ready for launch. Finally, we conclude that the project upholds its sole motive of benefiting society and working for a social cause.


9. Future Enhancements

It is not possible to develop a system that meets all the requirements of the user, since user requirements keep changing as the system is being used. Some of the future enhancements that can be made to this system are:
1) The system can be enhanced to improve the look and feel of the application.
2) The application can be scaled to serve a larger number of users.
