
Project Report

on

Greyhound Fleet Manager


Project work submitted in partial fulfillment
of the requirements for the award of the degree of

Master of Computer Applications


By

XXXXXXXXX
( Regd.No: XXXXXXXX )

Under the Guidance of

Mr. XXXXXXXX
(Project Coordinator, XXXXXXXXXXX)

<Paste your university emblem here>

XXXXXXX UNIVERSITY
CERTIFICATE

This is to certify that this project entitled “Greyhound Fleet Manager” is a


bonafide work carried out by XXXXXXXX bearing Hall Ticket No: XXXXXX in
XXXXXXXXXXXXX and submitted to XXXXX University in partial fulfillment of
the requirements for the award of Master of Computer Applications.

Project Guide External Examiner


Principal
ACKNOWLEDGMENT

“Task successful” makes everyone happy, but the happiness will be gold
without glitter if we do not acknowledge the people who supported us in
making it a success.
Success is credited to those who made it a reality, but the people whose
constant guidance and encouragement made it possible deserve to be
credited first on the eve of success.
This acknowledgement transcends the reality of formality when I express
deep gratitude and respect to all those people behind the screen who
guided, inspired and helped me in the completion of this project work.
I consider myself lucky to have got such a good project. This project will
be an asset to my academic profile.
I would like to express my thankfulness to my project guide, Mr. XXXXX,
for his constant motivation and valuable help throughout the project work,
and I express my gratitude to Mr. XXXXXXX, Director of XXXXXXXXX,
Hyderabad, for his constant supervision, guidance and co-operation
throughout the project.
I also extend my thanks to my team members for their co-operation during
my course.
Finally, I would like to thank my friends for their co-operation in
completing this project.

XXXXXXX
ABSTRACT

To develop a feature-rich web solution for a typical transportation
department of a manufacturing company, in a typical software development
environment.

1. Title of the project: Greyhound Fleet Manager


2. Abstract:
The ‘Greyhound Fleet Manager’ keeps track of information about the Vehicles,
Maintenance, Repairs, Parts, Employees, Locations and Vendors. It also keeps
track of the maintenance performed on the different vehicles used for
transportation.

The super users of the system are the ‘ADMIN’ and the ‘MANAGERS’ of the
different departments allocated by the admin. The admin may be the owner of
the transportation organization or the manager of the transportation
department of a particular manufacturing company.

Whenever a vehicle is added to the organization's/department's existing
fleet, its details are recorded, including whether it was bought new or
acquired on a loan/lease. Likewise, when an employee is newly appointed or an
existing employee is taken off the rolls, the details are maintained in both
cases, including personal and professional details.

The details of the maintenance already performed, such as repairs/services,
as well as the maintenance yet to be performed, are also maintained. The
maintenance to be performed can be scheduled for each type of vehicle. The
details of the parts/inventory used for the vehicles are maintained; the
reorder level and the reorder quantity are predefined for each particular
type of part.

The Vendors, or suppliers, supply the vehicles and parts and perform the
maintenance required for the vehicles. The particulars of the various vendors
are maintained in this system.

3. Existing System:

 Managing huge fleet information manually is a tedious and error-prone task.
 In order to schedule vehicles as well as staff, the scheduler needs to know
how many vehicles are on board and available for allocation.
 Keeping track of repair information is a must, as vehicles are sometimes
referred for insurance.
 None of these things can be achieved in the existing system.

4. Proposed System:

 In order to avoid the limitations of the existing system, the current system
is being developed.
 All vehicle details will be automated, along with the staff information.
 Scheduling of trips and repair information is fully automated to overcome
chaos in the system.
5. Features:
The Functionalities provided by the Project are as follows.

• Vehicle
• Employee information
• Reports
• Parts
• Location
• Repairs
• Vendor
• Maintenance

6. Modules:
The application comprises the following major modules.
• User Authentication
• Vehicle
• Inventory
• Employee information
• Maintenance
7. Requirements:

• Hardware requirements:

Content   Description
HDD       20 GB minimum; 40 GB recommended
RAM       1 GB minimum; 2 GB recommended

• Software requirements:

Content        Description
OS             Windows XP with SP2 or Windows Vista
Database       MS SQL Server 2005
Technologies   ASP.NET with VB.NET
IDE            Microsoft Visual Studio .NET 2008
Browser        Internet Explorer
Paste Organization profile here
CONTENTS

1. INTRODUCTION

1.1. INTRODUCTION TO PROJECT


1.2. PURPOSE OF THE PROJECT
1.3. PROBLEM IN EXISTING SYSTEM
1.4. SOLUTION OF THESE PROBLEMS

2. SYSTEM ANALYSIS

2.1. STUDY OF THE SYSTEM


2.2. PROPOSED SYSTEM
2.3. INPUT & OUTPUT
2.4. PROCESS MODELS USED WITH JUSTIFICATION
2.5. SYSTEM ARCHITECTURE

3. FEASIBILITY STUDY

3.1. TECHNICAL FEASIBILITY


3.2. OPERATIONAL FEASIBILITY
3.3. ECONOMIC FEASIBILITY

4. SOFTWARE REQUIREMENT SPECIFICATIONS

4.1. FUNCTIONAL REQUIREMENTS


4.2. PERFORMANCE REQUIREMENTS
4.3. HARDWARE REQUIREMENTS
4.4. SOFTWARE REQUIREMENTS
4.4.1. INTRODUCTION TO .NET FRAMEWORK
4.4.2. VB.NET

5. SYSTEM DESIGN

5.1 INTRODUCTION
5.2 DATA FLOW DIAGRAMS
5.3 UML DIAGRAMS

6. OUTPUT SCREENS
7. SYSTEM TESTING AND IMPLEMENTATION

7.1 INTRODUCTION TO TESTING


7.2 TESTING STRATEGIES
7.3 IMPLEMENTATION

8. SYSTEM SECURITY

8.1 INTRODUCTION
8.2 SECURITY IN SOFTWARE

9. CONCLUSION

10. FUTURE SCOPE

11. BIBLIOGRAPHY
1.1. INTRODUCTION & OBJECTIVE
The ‘Greyhound Fleet Manager’ keeps track of information about the Vehicles,
Maintenance, Repairs, Parts, Employees, Locations and Vendors. It also keeps
track of the maintenance performed on the different vehicles used for
transportation.

The super users of the system are the ‘ADMIN’ and the ‘MANAGERS’ of the
different departments allocated by the admin. The admin may be the owner of
the transportation organization or the manager of the transportation
department of a particular manufacturing company.

Whenever a vehicle is added to the organization's/department's existing
fleet, its details are recorded, including whether it was bought new or
acquired on a loan/lease. Likewise, when an employee is newly appointed or an
existing employee is taken off the rolls, the details are maintained in both
cases, including personal and professional details.

The details of the maintenance already performed, such as repairs/services,
as well as the maintenance yet to be performed, are also maintained. The
maintenance to be performed can be scheduled for each type of vehicle. The
details of the parts/inventory used for the vehicles are maintained; the
reorder level and the reorder quantity are predefined for each particular
type of part.

The Vendors, or suppliers, supply the vehicles and parts and perform the
maintenance required for the vehicles. The particulars of the various vendors
are maintained in this system.


1.2. PURPOSE OF THE PROJECT
The purpose of developing this application is to streamline the vehicle
management process in the company and avoid human errors. Tracking vehicle
information manually, especially when the organization has hundreds of
vehicles, is a tedious process. One of the major tasks is sending the vehicles
for servicing; in the manual process there is no provision to know that a
vehicle is due for servicing unless its record is verified by hand.

1.3. PROBLEM IN EXISTING SYSTEM

• Managing huge fleet information manually is a tedious and error-prone task.
• In order to schedule vehicles as well as staff, the scheduler needs to know
how many vehicles are on board and available for allocation.
• Keeping track of repair information is a must, as vehicles are sometimes
referred for insurance.
None of these things can be achieved in the existing system.

1.4. SOLUTION OF THESE PROBLEMS

• In order to avoid the limitations of the existing system, the current
system is being developed.
• All vehicle details will be automated, along with the staff information.
• Scheduling of trips and repair information is fully automated to overcome
chaos in the system.
2.1. STUDY OF THE SYSTEM
NUMBER OF MODULES

The current system is divided into the following modules, which are closely
integrated with one another.

 User Authentication
 Vehicle
 Employee Information
 Maintenance
 Inventory

Vehicle Module:
• Adding vehicles and/or equipment is a simple process and does not require a
wealth of information.
• The year, make, model, current Mi/Km/Hr reading and the base information
are all that is needed.
• A vehicle can therefore be added with only the most basic information (a
minimal sketch of such a record follows this list).
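
As a hedged illustration only (not the delivered code), the basic vehicle
record described above might be represented in VB.NET as follows; the class
name Vehicle and its field names are assumptions drawn from the bullets.

' Hypothetical sketch of the basic data captured when a vehicle is added.
' Class and field names are assumptions, not the actual project schema.
Public Class Vehicle
    Public VehicleId As Integer       ' internal identifier
    Public ModelYear As Integer       ' year of manufacture
    Public Make As String             ' e.g. "Volvo"
    Public Model As String            ' e.g. "9700"
    Public CurrentReading As Decimal  ' current Mi/Km/Hr reading
    Public ReadingUnit As String      ' "Mi", "Km" or "Hr"
    Public OwnershipType As String    ' "New", "Loan" or "Lease"
End Class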

Employee Information:

• The application will keep track of many details, including Employee number,
Name, personal information and License information.
• It also allows the Employee information to be edited. Under this module
there is a facility to add, delete and modify the information regarding an
Employee.

Maintenance Module:

• This module keeps track of the maintenance information.

Inventory Module:

This module maintains the parts/inventory details and generates the following reports:

Fleet General Information


Cost summary
PM Service
Maintenance Service (Last Performed)
Repair Maintenance
Part Inventory
Part Reorder
Vendor
Employee General Details

User Authentication:
• This module involves the Administrator's operations: vehicle registration,
creating users (employees) and authenticating the employees. The
Administrator maintains the entire application. A minimal sketch of such an
authentication check follows.
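
As a hedged sketch only, assuming a Users table with LoginName and
PasswordHash columns (these names are illustrative, not the actual project
schema), an employee login could be validated against the database roughly as
follows.

' Hypothetical login check; the Users table and its column names are assumptions.
Imports System.Data.SqlClient

Public Class Authenticator
    ' Returns True when matching credentials exist in the assumed Users table.
    Public Function IsValidUser(ByVal connectionString As String, _
                                ByVal loginName As String, _
                                ByVal passwordHash As String) As Boolean
        Using conn As New SqlConnection(connectionString)
            Dim sql As String = _
                "SELECT COUNT(*) FROM Users WHERE LoginName = @login AND PasswordHash = @hash"
            Using cmd As New SqlCommand(sql, conn)
                cmd.Parameters.AddWithValue("@login", loginName)
                cmd.Parameters.AddWithValue("@hash", passwordHash)
                conn.Open()
                Return CInt(cmd.ExecuteScalar()) > 0
            End Using
        End Using
    End Function
End Class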

2.2. PROPOSED SYSTEM


The proposed system should possess the following features:
• Vehicle information management
• Fuel tracking management
• Repairing information management
• Parts maintenance
• Employee information management

2.3. INPUT AND OUTPUT


INPUT DESIGN
Input design is a part of the overall system design. The main objectives
during input design are as given below:
• To produce a cost-effective method of input.
• To achieve the highest possible level of accuracy.
• To ensure that the input is acceptable and understood by the user.
INPUT STAGES:
The main input stages before the information gets stored in the database are:
• Data recording
• Data transcription
• Data conversion
• Data verification
• Data control
• Data transmission
• Data validation
• Data correction

OUTPUT DESIGN

Outputs from computer systems are required primarily to communicate the results
of processing to users. They are also used to provide a permanent copy of the
results for later consultation. The various types of outputs in general are:
• External Outputs, whose destination is outside the organization.
• Internal Outputs, whose destination is within the organization and which
are the user's main interface with the computer.
• Operational outputs, whose use is purely within the computer department.
• Interface outputs, which involve the user in communicating directly with
the system.

The outputs need to be generated as hard copies as well as queries to be
viewed on the screen. Keeping these outputs in view, the format for the
output is taken from the outputs that are currently obtained after manual
processing. A standard printer is to be used as the output medium for hard
copies.

2.4. PROCESS MODELS USED WITH JUSTIFICATION

SDLC MODEL:
Waterfall Model
Software products are oriented towards customers like any other engineering
products. A product is either driven by the market or it drives the market.
Customer Satisfaction was the main aim in the 1980s; Customer Delight is
today's logo and Customer Ecstasy is the buzzword of the new millennium.
Products which are not customer oriented have no place in the market, even if
they are designed using the best technology. The front end of the product is
as crucial as its internal technology.
A market study is necessary to identify a potential customer's need. This
process is also called market research. The already existing needs and the
possible future needs are combined together for study. A lot of assumptions
are made during a market study, and assumptions are very important factors in
the development or the start of a product's development. Unrealistic
assumptions can cause the entire venture to nosedive. Although assumptions
are conceptual, there should be a move to develop tangible assumptions in
order to move towards a successful product.
Once the market study is done, the customer's need is given to the Research
and Development department to develop a cost-effective system that could
potentially solve the customer's needs better than the competitors. Once the
system is developed and tested in a hypothetical environment, the development
team takes control of it. The development team adopts one of the software
development models to develop the proposed system and gives it to the
customers.
The basic popular models used by many software development firms are as
follows:
A) System Development Life Cycle (SDLC) Model
B) Prototyping Model
C) Rapid Application Development Model
D) Component Assembly Model
A) System Development Life Cycle Model (SDLC Model):

This is also called the Classic Life Cycle Model, the Linear Sequential Model
or the Waterfall Method. This model has the following activities.

1. System/Information Engineering and Modeling


2. Software Requirements Analysis
3. Systems Analysis and Design
4. Code Generation
5. Testing
6. Maintenance

1) System/Information Engineering and Modeling


As software development is a large process, work begins by establishing
requirements for all system elements and then allocating some subset of these
requirements to software. This system view is essential when software must
interface with other elements such as hardware, people and other resources.
The system is the essential context for the existence of the software in any
entity. In some cases, for maximum output, the system should be re-engineered
and spruced up. Once the ideal system is designed according to the
requirements, the development team studies the software requirements for the
system.

2) Software Requirement Analysis


Software Requirement Analysis is also known as the feasibility study. In this
requirement analysis phase, the development team visits the customer and
studies their system requirements. They examine the need for possible
software automation in the given system. After the feasibility study, the
development team provides a document that holds the specific recommendations
for the candidate system. It also includes personnel assignments, costs of
the system, the project schedule and target dates.
The requirements analysis and information gathering process is intensified
and focused specifically on software. To understand what type of program is
to be built, the system analyst must study the information domain for the
software as well as understand the required function, behavior, performance
and interfacing. The main purpose of the requirement analysis phase is to
find the need and to define the problem that needs to be solved.

3) System Analysis and Design


In the System Analysis and Design phase, the overall software structure and
its layout are defined. In the case of client/server processing technology,
the number of tiers required for the package architecture, the database
design, the data structure design and so on are all defined in this phase.
After the design part, a software development model is created. Analysis and
Design are very important in the whole development cycle. Any fault in the
design phase can be very expensive to fix later in the software development
process. In this phase, the logical system of the product is developed.

4) Code Generation
In the Code Generation phase, the design is translated into a machine-readable
form. If the design of the software product has been done in a detailed
manner, code generation can be achieved without much complication.
Programming tools like compilers, interpreters and debuggers are used to
generate the code. For coding, different high-level programming languages
like C, C++, Pascal and Java are used; the right programming language is
chosen according to the type of application.

5) Testing
After the code generation phase, testing of the software program begins.
Different testing methods are available to detect the defects that were
introduced during the previous phases. A number of testing tools and methods
are already available for this purpose.
6) Maintenance
Software will definitely go through change once it is delivered to the
customer. There are many reasons for change. Change could happen due to some
unexpected input values into the system. In addition, changes in the system
directly affect the software's operation. The software should be developed to
accommodate changes that could happen during the post-development period.

DESIGN PRINCIPLES & METHODOLOGY:


Object Oriented Analysis And Design
When object orientation is used in analysis as well as design, the boundary
between OOA and OOD is blurred. This is particularly true in methods that
combine analysis and design. One reason for this blurring is the similarity
of the basic constructs (i.e., objects and classes) that are used in OOA and
OOD. Though there is no agreement about which parts of the object-oriented
development process belong to analysis and which to design, there is some
general agreement about the domains of the two activities.
The fundamental difference between OOA and OOD is that the former models the
problem domain, leading to an understanding and specification of the problem,
while the latter models the solution to the problem. That is, analysis deals
with the problem domain, while design deals with the solution domain.
However, in OOAD the problem domain representation is subsumed in the
solution domain representation: the solution domain representation, created
by OOD, generally contains much of the representation created by OOA. The
separating line is a matter of perception, and different people have
different views on it. The lack of a clear separation between analysis and
design can also be considered one of the strong points of the object-oriented
approach: the transition from analysis to design is “seamless”. This is also
the main reason for using OOAD methods, in which analysis and design are
performed together.
The main difference between OOA and OOD, due to the different domains of
modeling, is in the type of objects that come out of the analysis and design
process.

Features of OOAD:
• It uses objects as the building blocks of the application rather than
functions.
• All objects can be represented graphically, including the relations between
them.
• All key participants in the system are represented as actors and the
actions done by them are represented as use cases.
• A typical use case is nothing but a systematic flow of a series of events,
which can be well described using sequence diagrams; each event can be
described diagrammatically by activity as well as state chart diagrams.
• The entire system can thus be well described using the OOAD model; hence
this model is chosen.

THE GENESIS OF UML:


Software engineering has slowly become part of our everyday life. From
washing machines to compact disc players, through cash machines and phones,
most of our daily activities use software, and as time goes by, this software
becomes more complex and costly.
The demand for sophisticated software greatly increases the constraints
imposed on development teams. Software engineers are facing a world of
growing complexity due to the nature of applications, the distributed and
heterogeneous environments, the size of programs, the organization of
software development teams, and end users' ergonomic expectations.
To surmount these difficulties, software engineers will have to learn not
only how to do their job, but also how to explain their work to others, and
how to understand when others' work is explained to them. For these reasons,
they have (and will always have) an increasing need for methods.

From Functional to Object-Oriented Methods


Although object-oriented methods have roots that are strongly anchored back
in the 1960s, structured and functional methods were the first to be used.
This is not very surprising, since functional methods are inspired directly
by computer architecture (a proven domain well known to computer scientists).
The separation of data and code, just as it exists physically in the
hardware, was translated into the methods; this is how computer scientists
got into the habit of thinking in terms of system functions.
This approach is natural when looked at in its historical context, but today,
because of its lack of abstraction, it has become almost completely
anachronistic. There is no reason to impose the underlying hardware on a
software solution. Hardware should act as the servant of the software that is
executed on it, rather than imposing architectural constraints.

TOWARDS A UNIFIED MODELLING LANGUAGE


The unification of object-oriented modeling methods became possible as
experience allowed evaluation of the various concepts proposed by existing
methods. Based on the fact that differences between the various methods were
becoming smaller, and that the method wars did not move object-oriented
technology forward any longer, Jim Rumbaugh and Grady Booch decided at the
end of 1994 to unify their work within a single method: the Unified Method. A year
later they were joined by Ivar Jacobson, the father of use cases, a very efficient
technique for the determination of requirements.

Booch, Rumbaugh and Jacobson adopted four goals:


• To represent complete systems (instead of only the software portion) using
object oriented concepts.
• To establish an explicit coupling between concepts and executable code.
• To take into account the scaling factors that are inherent to complex and
critical systems.
• To create a modeling language usable by both humans and machines.
2.5. SYSTEM ARCHITECTURE

The current application is developed by taking the 3-tier architecture as a
prototype. In a three-tier architecture (also known as a multi-tier
architecture), there are three or more interacting tiers, each with its own
specific responsibilities:

Three-Tier Architecture
• Tier 1: the client contains the presentation logic, including simple
controls and user input validation. This part of the application is also
known as a thin client. The client interface is developed using ASP.NET
server controls and, on some occasions, HTML controls.
• Tier 2: the middle tier is also known as the application server, which
provides the business process logic and the data access. The business
logic/business rules can be written in either C#.NET or VB.NET. These
business rules are deployed as DLLs in the IIS web server.
• Tier 3: the data server provides the business data. MS SQL Server acts as
Tier 3, which is the database layer.

These are some of the advantages of three-tier architecture:


• It is easier to modify or replace any tier without affecting the other tiers.
• Separating the application and database functionality means better load
balancing.
• Adequate security policies can be enforced within the server tiers without
hindering the clients.

The proposed system can be designed well with the three-tier model, as all
the layers fit naturally as part of the project. In the future, while
expanding the system, an n-tier architecture can be used in order to
implement integration touch points and to provide enhanced user interfaces.
A minimal sketch of how the tiers separate is given below.
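
As a hedged sketch only (the class, method and table names here are
illustrative assumptions, not the delivered code), the tier separation could
look like this in VB.NET: an ASP.NET code-behind in Tier 1 calls a
business-logic class in Tier 2, which uses a data-access class that talks to
the SQL Server database in Tier 3.

' Tier 3 (data access): talks to the database; the Vehicles table is an assumed name.
Imports System.Data
Imports System.Data.SqlClient

Public Class VehicleDataAccess
    Private _connectionString As String

    Public Sub New(ByVal connectionString As String)
        _connectionString = connectionString
    End Sub

    Public Function GetVehicles() As DataTable
        Dim table As New DataTable()
        Using conn As New SqlConnection(_connectionString)
            Using adapter As New SqlDataAdapter("SELECT VehicleId, Make, Model FROM Vehicles", conn)
                adapter.Fill(table)
            End Using
        End Using
        Return table
    End Function
End Class

' Tier 2 (business logic): the place for validation and business rules.
Public Class VehicleService
    Private _dataAccess As VehicleDataAccess

    Public Sub New(ByVal dataAccess As VehicleDataAccess)
        _dataAccess = dataAccess
    End Sub

    Public Function ListVehicles() As DataTable
        ' Business rules (filtering, authorization checks, etc.) would go here.
        Return _dataAccess.GetVehicles()
    End Function
End Class

' Tier 1 (presentation): an ASP.NET code-behind would bind the result to a grid, e.g.
'   GridViewVehicles.DataSource = service.ListVehicles()
'   GridViewVehicles.DataBind()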
3.1 TECHNICAL FEASIBILITY:
Evaluating technical feasibility is the trickiest part of a feasibility
study. This is because, at this point in time, not much detailed design of
the system has been done, making it difficult to assess issues like
performance and costs (on account of the kind of technology to be deployed).
A number of issues have to be considered while doing a technical analysis.
Understand the different technologies involved in the proposed system:
Before commencing the project, we have to be very clear about the
technologies that are required for the development of the new system.
Find out whether the organization currently possesses the required technologies:
Is the required technology available with the organization?
If so, is the capacity sufficient?
For instance –
“Will the current printer be able to handle the new reports and forms
required for the new system?”

3.2 OPERATIONAL FEASIBILITY:


Proposed projects are beneficial only if they can be turned into information
systems that will meet the organization's operating requirements. Simply
stated, this test of feasibility asks if the system will work when it is
developed and installed, and whether there are major barriers to
implementation. Here are questions that will help test the operational
feasibility of a project:
Is there sufficient support for the project from management and from users?
If the current system is well liked and used to the extent that people will
not be able to see reasons for change, there may be resistance.
Are the current business methods acceptable to the users? If they are not,
users may welcome a change that will bring about a more operational and
useful system.
Have the users been involved in the planning and development of the project?
Early involvement reduces the chances of resistance to the system in general
and increases the likelihood of a successful project.
Since the proposed system helps reduce the hardships encountered in the
existing manual system, the new system was considered operationally feasible.

3.3 ECONOMIC FEASIBILITY:


It refers to the benefits or outcomes we derive from the product as compared
to the total cost we spend on developing it. If the benefits are more or less
the same as with the older system, then it is not feasible to develop the
product.
In the present case, the development of the new product greatly enhances the
accuracy of the system and cuts short the delay in the processing of fleet
records. Errors can be greatly reduced while at the same time providing a
great level of security. No additional equipment is needed except memory of
the required capacity.

There is no need to spend money on the client for maintenance, because the
database used is a web-enabled database.
INTRODUCTION

Purpose: The main purpose of preparing this document is to give a general insight into
the analysis and requirements of the existing system or situation and to determine the
operating characteristics of the system.

Scope: This document plays a vital role in the development life cycle (SDLC), as it
describes the complete requirements of the system. It is meant for use by the developers
and will be the basis during the testing phase. Any changes made to the requirements in
the future will have to go through a formal change approval process.

DEVELOPER'S RESPONSIBILITIES OVERVIEW:

The developer is responsible for:

• Developing the system so that it meets the SRS and solves all the requirements of
the system.
• Demonstrating the system and installing it at the client's location after the
acceptance testing is successful.
• Submitting the required user manual describing the system interfaces and the other
documents of the system.
• Conducting any user training that might be needed for using the system.
• Maintaining the system for a period of one year after installation.

4.1. FUNCTIONAL REQUIREMENTS:


The proposed system should possess the following features:
• Vehicle information management
• Fuel tracking management
• Repairing information management
• Parts maintenance
• Employee information management
4.2. PERFORMANCE REQUIREMENTS
Performance is measured in terms of the output provided by the application.

Requirement specification plays an important part in the analysis of a system. Only
when the requirement specifications are properly given is it possible to design a
system that will fit into the required environment. It rests largely with the users of
the existing system to give the requirement specifications, because they are the people
who will finally use the system. The requirements have to be known during the initial
stages so that the system can be designed according to them. It is very difficult to
change the system once it has been designed; on the other hand, designing a system that
does not cater to the requirements of the users is of no use.

The requirement specification for any system can be broadly stated as given below:
• The system should be able to interface with the existing system
• The system should be accurate
• The system should be better than the existing system

The existing system is completely dependent on the user to perform all the duties.

4.3. HARDWARE REQUIREMENTS


Content   Description
HDD       20 GB minimum; 40 GB recommended
RAM       1 GB minimum; 2 GB recommended

4.4. SOFTWARE REQUIREMENTS


Content        Description
OS             Windows XP with SP2 or Windows Vista
Database       MS SQL Server 2005
Technologies   ASP.NET with VB.NET
IDE            Microsoft Visual Studio .NET 2008
Browser        Internet Explorer
4.4.1. INTRODUCTION TO .NET Framework
The .NET Framework is a new computing platform that simplifies application
development in the highly distributed environment of the Internet. The .NET Framework is
designed to fulfill the following objectives:

• To provide a consistent object-oriented programming environment whether object


code is stored and executed locally, executed locally but Internet-distributed, or
executed remotely.
• To provide a code-execution environment that minimizes software deployment and
versioning conflicts.
• To provide a code-execution environment that guarantees safe execution of code,
including code created by an unknown or semi-trusted third party.
• To provide a code-execution environment that eliminates the performance problems
of scripted or interpreted environments.
• To make the developer experience consistent across widely varying types of
applications, such as Windows-based applications and Web-based applications.
• To build all communication on industry standards to ensure that code based on
the .NET Framework can integrate with any other code.
The .NET Framework has two main components: the common language runtime and
the .NET Framework class library. The common language runtime is the foundation of
the .NET Framework. You can think of the runtime as an agent that manages code at
execution time, providing core services such as memory management, thread
management, and Remoting, while also enforcing strict type safety and other forms of
code accuracy that ensure security and robustness. In fact, the concept of code
management is a fundamental principle of the runtime. Code that targets the runtime is
known as managed code, while code that does not target the runtime is known as
unmanaged code. The class library, the other main component of the .NET Framework, is a
comprehensive, object-oriented collection of reusable types that you can use to develop
applications ranging from traditional command-line or graphical user interface (GUI)
applications to applications based on the latest innovations provided by ASP.NET, such as
Web Forms and XML Web services.
The .NET Framework can be hosted by unmanaged components that load the
common language runtime into their processes and initiate the execution of managed
code, thereby creating a software environment that can exploit both managed and
unmanaged features. The .NET Framework not only provides several runtime hosts, but
also supports the development of third-party runtime hosts.

For example, ASP.NET hosts the runtime to provide a scalable, server-side


environment for managed code. ASP.NET works directly with the runtime to enable Web
Forms applications and XML Web services, both of which are discussed later in this topic.

Internet Explorer is an example of an unmanaged application that hosts the runtime


(in the form of a MIME type extension). Using Internet Explorer to host the runtime enables
you to embed managed components or Windows Forms controls in HTML documents.
Hosting the runtime in this way makes managed mobile code (similar to Microsoft®
ActiveX® controls) possible, but with significant improvements that only managed code
can offer, such as semi-trusted execution and secure isolated file storage.

The following illustration shows the relationship of the common language runtime
and the class library to your applications and to the overall system. The illustration also
shows how managed code operates within a larger architecture.

FEATURES OF THE COMMON LANGUAGE RUNTIME


The common language runtime manages memory, thread execution, code
execution, code safety verification, compilation, and other system services. These features
are intrinsic to the managed code that runs on the common language runtime.
With regards to security, managed components are awarded varying degrees of
trust, depending on a number of factors that include their origin (such as the Internet,
enterprise network, or local computer). This means that a managed component might or
might not be able to perform file-access operations, registry-access operations, or other
sensitive functions, even if it is being used in the same active application.
The runtime enforces code access security. For example, users can trust that an
executable embedded in a Web page can play an animation on screen or sing a song, but
cannot access their personal data, file system, or network. The security features of the
runtime thus enable legitimate Internet-deployed software to be exceptionally
feature-rich.
The runtime also enforces code robustness by implementing a strict type- and code-
verification infrastructure called the common type system (CTS). The CTS ensures that all
managed code is self-describing. The various Microsoft and third-party language compilers
generate managed code that conforms to the CTS. This means that managed code can consume
other managed types and instances, while strictly enforcing type fidelity and type
safety.
In addition, the managed environment of the runtime eliminates many common
software issues. For example, the runtime automatically handles object layout and
manages references to objects, releasing them when they are no longer being used. This
automatic memory management resolves the two most common application errors,
memory leaks and invalid memory references.
The runtime also accelerates developer productivity. For example, programmers
can write applications in their development language of choice, yet take full advantage of
the runtime, the class library, and components written in other languages by other
developers. Any compiler vendor who chooses to target the runtime can do so. Language
compilers that target the .NET Framework make the features of the .NET Framework
available to existing code written in that language, greatly easing the migration process
for existing applications.
While the runtime is designed for the software of the future, it also supports
software of today and yesterday. Interoperability between managed and unmanaged code
enables developers to continue to use necessary COM components and DLLs.
The runtime is designed to enhance performance. Although the common language
runtime provides many standard runtime services, managed code is never interpreted. A
feature called just-in-time (JIT) compiling enables all managed code to run in the native
machine language of the system on which it is executing. Meanwhile, the memory
manager removes the possibilities of fragmented memory and increases memory locality-
of-reference to further increase performance.
Finally, the runtime can be hosted by high-performance, server-side applications,
such as Microsoft® SQL Server™ and Internet Information Services (IIS). This
infrastructure enables you to use managed code to write your business logic, while still
enjoying the superior performance of the industry's best enterprise servers that support
runtime hosting.
.NET FRAMEWORK CLASS LIBRARY

The .NET Framework class library is a collection of reusable types that tightly
integrate with the common language runtime. The class library is object oriented,
providing types from which your own managed code can derive functionality. This not only
makes the .NET Framework types easy to use, but also reduces the time associated with
learning new features of the .NET Framework. In addition, third-party components can
integrate seamlessly with classes in the .NET Framework.
For example, the .NET Framework collection classes implement a set of interfaces
that you can use to develop your own collection classes. Your collection classes will blend
seamlessly with the classes in the .NET Framework.
As you would expect from an object-oriented class library, the .NET Framework
types enable you to accomplish a range of common programming tasks, including tasks
such as string management, data collection, database connectivity, and file access. In
addition to these common tasks, the class library includes types that support a variety of
specialized development scenarios. For example, you can use the .NET Framework to
develop the following types of applications and services:
• Console applications.
• Scripted or hosted applications.
• Windows GUI applications (Windows Forms).
• ASP.NET applications.
• XML Web services.
• Windows services.
For example, the Windows Forms classes are a comprehensive set of reusable types
that vastly simplify Windows GUI development. If you write an ASP.NET Web Form
application, you can use the Web Forms classes.

CLIENT APPLICATION DEVELOPMENT

Client applications are the closest to a traditional style of application in Windows-


based programming. These are the types of applications that display windows or forms on
the desktop, enabling a user to perform a task. Client applications include applications
such as word processors and spreadsheets, as well as custom business applications such
as data-entry tools, reporting tools, and so on. Client applications usually employ windows,
menus, buttons, and other GUI elements, and they likely access local resources such as
the file system and peripherals such as printers.
Another kind of client application is the traditional ActiveX control (now replaced by
the managed Windows Forms control) deployed over the Internet as a Web page. This
application is much like other client applications: it is executed natively, has access to
local resources, and includes graphical elements.
In the past, developers created such applications using C/C++ in conjunction with
the Microsoft Foundation Classes (MFC) or with a rapid application development (RAD)
environment such as Microsoft® Visual Basic®. The .NET Framework incorporates aspects
of these existing products into a single, consistent development environment that
drastically simplifies the development of client applications.
The Windows Forms classes contained in the .NET Framework are designed to be
used for GUI development. You can easily create command windows, buttons, menus,
toolbars, and other screen elements with the flexibility necessary to accommodate shifting
business needs.
For example, the .NET Framework provides simple properties to adjust visual
attributes associated with forms. In some cases the underlying operating system does not
support changing these attributes directly, and in these cases the .NET Framework
automatically recreates the forms. This is one of many ways in which the .NET Framework
integrates the developer interface, making coding simpler and more consistent.
Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a
user's computer. This means that binary or natively executing code can access some of
the resources on the user's system (such as GUI elements and limited file access) without
being able to access or compromise other resources. Because of code access security,
many applications that once needed to be installed on a user's system can now be safely
deployed through the Web. Your applications can implement the features of a local
application while being deployed like a Web page.
4.4.2 VB.NET

The Microsoft Visual Basic programming language is a high-level


programming language for the Microsoft .NET Framework. Although it is designed
to be an approachable and easy-to-learn language, it is also powerful enough to
satisfy the needs of experienced programmers. The Visual Basic programming
language has a syntax that is similar to English, which promotes the clarity and
readability of Visual Basic code. Wherever possible, meaningful words or phrases
are used instead of abbreviations, acronyms, or special characters. Extraneous or
unneeded syntax is generally allowed but not required.
The Visual Basic programming language can be either a strongly typed or a
loosely typed language. Loose typing defers much of the burden of type checking
until a program is already running. This includes not only type checking of
conversions but also of method calls, meaning that the binding of a method call
can be deferred until run-time. This is useful when building prototypes or other
programs in which speed of development is more important than execution speed.
The Visual Basic programming language also provides strongly typed semantics
that performs all type checking at compile-time and disallows run-time binding of
method calls. This guarantees maximum performance and helps ensure that type
conversions are correct. This is useful when building production applications in
which speed of execution and execution correctness is important.

Declarations
A Visual Basic program is made up of named entities. These entities are
introduced through declarations and represent the "meaning" of the program.
At a top level, namespaces are entities that organize other entities, such as
nested namespaces and types. Types are entities that describe values and define
executable code. Types may contain nested types and type members. Type
members are constants, variables, methods, operators, properties, events,
enumeration values, and constructors.
An entity that can contain other entities defines a declaration space. Entities are
introduced into a declaration space either through declarations or inheritance; the
containing declaration space is called the entities' declaration context. Declaring
an entity in a declaration space in turn defines a new declaration space that can
contain further nested entity declarations; thus, the declarations in a program form
a hierarchy of declaration spaces.
Except in the case of overloaded type members, it is invalid for declarations to
introduce identically named entities of the same kind into the same declaration
context. Additionally, a declaration space may never contain different kinds of
entities with the same name; for example, a declaration space may never contain
a variable and a method by the same name.
Annotation
It may be possible in other languages to create a declaration space that
contains different kinds of entities with the same name (for example, if the
language is case sensitive and allows different declarations based on casing). In
that situation, the most accessible entity is considered bound to that name; if more
than one type of entity is most accessible then the name is ambiguous. Public is
more accessible than Protected Friend, Protected Friend is more accessible than
Protected or Friend, and Protected or Friend is more accessible than Private.

The declaration space of a namespace is "open ended," so two namespace


declarations with the same fully qualified name contribute to the same declaration
space. In the example below, the two namespace declarations contribute to the
same declaration space, in this case declaring two classes with the fully qualified
names Data.Customer and Data.Order:
Namespace Data
    Class Customer
    End Class
End Namespace

Namespace Data
    Class Order
    End Class
End Namespace
Because the two declarations contribute to the same declaration space, a compile-
time error would occur if each contained a declaration of a class with the same
name.
Overloading and Signatures
The only way to declare identically named entities of the same kind in a
declaration space is through overloading. Only methods, operators, instance
constructors, and properties may be overloaded.
Overloaded type members must possess unique signatures. The signature of a
type member consists of the name of the type member, the number of type
parameters, and the number and types of the member's parameters. Conversion
operators also include the return type of the operator in the signature.
The following are not part of a member's signature, and hence cannot be
overloaded on:
• Modifiers to a type member (for example, Shared or Private).
• Modifiers to a parameter (for example, ByVal or ByRef).
• The names of the parameters.
• The return type of a method or operator (except for conversion operators) or
the element type of a property.
• Constraints on a type parameter.
The following example shows a set of method declarations along with their
signatures. Note that if these members all shared a single name, the
declaration would not be valid, because several of them would then have
identical signatures (for example, ByRef and ParamArray modifiers do not
distinguish signatures).
Interface ITest
    Sub F1()                              ' Signature is F1().
    Sub F2(x As Integer)                  ' Signature is F2(Integer).
    Sub F3(ByRef x As Integer)            ' Signature is F3(Integer).
    Sub F4(x As Integer, y As Integer)    ' Signature is F4(Integer, Integer).
    Function F5(s As String) As Integer   ' Signature is F5(String).
    Function F6(x As Integer) As Integer  ' Signature is F6(Integer).
    Sub F7(a() As String)                 ' Signature is F7(String()).
    Sub F8(ParamArray a() As String)      ' Signature is F8(String()).
    Sub F9(Of T)()                        ' Signature is F9!1().
    Sub F10(Of T, U)(x As T, y As U)      ' Signature is F10!2(!1, !2).
    Sub F11(Of U, T)(x As T, y As U)      ' Signature is F11!2(!2, !1).
    Sub F12(Of T)(x As T)                 ' Signature is F12!1(!1).
    Sub F13(Of T As IDisposable)(x As T)  ' Signature is F13!1(!1).
End Interface
A method with optional parameters is considered to have multiple signatures,
one for each set of parameters that can be passed in by the caller. For example,
the following method has three corresponding signatures:
Sub F(x As Short, _
Optional y As Integer = 10, _
Optional z As Long = 20)
These are the method's signatures:
• F(Short)
• F(Short, Integer)
• F(Short, Integer, Long)
It is valid to define a generic type that may contain members with identical
signatures based on the type arguments supplied. Overload resolution rules are
used to try and disambiguate between such overloads, although there may be
situations in which it is impossible to disambiguate. For example:
Class C(Of T)
    Sub F(x As Integer)
    End Sub

    Sub F(x As T)
    End Sub

    Sub G(Of U)(x As T, y As U)
    End Sub

    Sub G(Of U)(x As U, y As T)
    End Sub
End Class

Module Test
    Sub Main()
        Dim x As New C(Of Integer)
        x.F(10)                  ' Calls C(Of T).F(Integer)
        x.G(Of Integer)(10, 10)  ' Error: Can't choose between overloads
    End Sub
End Module
Shadowing
A derived type shadows the name of an inherited type member by re-declaring
it. Shadowing a name does not remove the inherited type members with that
name; it merely makes all of the inherited type members with that name
unavailable in the derived class. The shadowing declaration may be any type
of entity.

Entities that can be overloaded can choose one of two forms of shadowing.
Shadowing by name is specified using the Shadows keyword; an entity that
shadows by name hides everything by that name in the base class, including
all overloads. Shadowing by name and signature is specified using the
Overloads keyword; an entity that shadows by name and signature hides
everything by that name with the same signature as the entity. A short
illustrative sketch follows.
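
The following minimal sketch (the class and member names are illustrative,
not taken from the project code) shows the two forms side by side.

' Hypothetical illustration of Shadows versus Overloads.
Class BaseType
    Public Sub Report()
    End Sub

    Public Sub Report(ByVal message As String)
    End Sub
End Class

Class ShadowByName
    Inherits BaseType

    ' Shadows hides every inherited Report, including Report(String).
    Public Shadows Sub Report()
    End Sub
End Class

Class ShadowBySignature
    Inherits BaseType

    ' Overloads hides only the inherited Report() with the same signature;
    ' Report(String) remains visible on this class.
    Public Overloads Sub Report()
    End Sub
End Class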

ADO.NET OVERVIEW

ADO.NET is an evolution of the ADO data access model that directly


addresses user requirements for developing scalable applications. It was designed
specifically for the web with scalability, statelessness, and XML in mind.

ADO.NET uses some ADO objects, such as the Connection and Command
objects, and also introduces new objects. Key new ADO.NET objects include the
DataSet, DataReader, and DataAdapter.

The important distinction between this evolved stage of ADO.NET and


previous data architectures is that there exists an object -- the DataSet -- that is
separate and distinct from any data stores. Because of that, the DataSet functions
as a standalone entity. You can think of the DataSet as an always disconnected
recordset that knows nothing about the source or destination of the data it
contains. Inside a DataSet, much like in a database, there are tables, columns,
relationships, constraints, views, and so forth.

A DataAdapter is the object that connects to the database to fill the


DataSet. Then, it connects back to the database to update the data there, based
on operations performed while the DataSet held the data. In the past, data
processing has been primarily connection-based. Now, in an effort to make multi-
tiered apps more efficient, data processing is turning to a message-based
approach that revolves around chunks of information. At the center of this
approach is the DataAdapter, which provides a bridge to retrieve and save data
between a DataSet and its source data store. It accomplishes this by means of
requests to the appropriate SQL commands made against the data store.
The XML-based DataSet object provides a consistent programming model
that works with all models of data storage: flat, relational, and hierarchical. It does
this by having no 'knowledge' of the source of its data, and by representing the
data that it holds as collections and data types. No matter what the source of the
data within the DataSet is, it is manipulated through the same set of standard
APIs exposed through the DataSet and its subordinate objects.
While the DataSet has no knowledge of the source of its data, the managed
provider has detailed and specific information. The role of the managed provider is
to connect, fill, and persist the DataSet to and from data stores. The OLE DB and
SQL Server .NET Data Providers (System.Data.OleDb and System.Data.SqlClient)
that are part of the .Net Framework provide four basic objects: the Command,
Connection, DataReader and DataAdapter. In the remaining sections of this
document, we'll walk through each part of the DataSet and the OLE DB/SQL
Server .NET Data Providers explaining what they are, and how to program against
them.
The following sections will introduce you to some objects that have evolved,
and some that are new. These objects are:

• Connections. For connecting to and managing transactions against a
database.
• Commands. For issuing SQL commands against a database.
• DataReaders. For reading a forward-only stream of data records from a SQL
Server data source.
• DataSets. For storing, Remoting and programming against flat data, XML
data and relational data.
• DataAdapters. For pushing data into a DataSet, and reconciling data
against a database.
When dealing with connections to a database, there are two different
options: SQL Server .NET Data Provider (System.Data.SqlClient) and OLE DB .NET
Data Provider (System.Data.OleDb). In these samples we will use the SQL Server
.NET Data Provider. These are written to talk directly to Microsoft SQL Server. The
OLE DB .NET Data Provider is used to talk to any OLE DB provider (as it uses OLE
DB underneath).

Connections:
Connections are used to 'talk to' databases, and are represented by
provider-specific classes such as SqlConnection. Commands travel over
connections and resultsets are returned in the form of streams which can be read
by a DataReader object, or pushed into a DataSet object.

Commands:
Commands contain the information that is submitted to a database, and are
represented by provider-specific classes such as SqlCommand. A command can
be a stored procedure call, an UPDATE statement, or a statement that returns
results. You can also use input and output parameters, and return values as part of
your command syntax. The example below shows how to issue an INSERT
statement against the Northwind database.
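
The original example is not reproduced here; as a hedged sketch only, an
INSERT against the Customers table of the Northwind sample database might
look like the following (the column values are illustrative).

' Hypothetical sketch: issuing a parameterized INSERT against the Northwind sample database.
Imports System.Data.SqlClient

Module InsertExample
    Sub InsertCustomer(ByVal connectionString As String)
        Using conn As New SqlConnection(connectionString)
            Dim sql As String = _
                "INSERT INTO Customers (CustomerID, CompanyName) VALUES (@id, @name)"
            Using cmd As New SqlCommand(sql, conn)
                cmd.Parameters.AddWithValue("@id", "GREYH")
                cmd.Parameters.AddWithValue("@name", "Greyhound Fleet Services")
                conn.Open()
                Dim rowsInserted As Integer = cmd.ExecuteNonQuery()
                Console.WriteLine("Rows inserted: " & rowsInserted.ToString())
            End Using
        End Using
    End Sub
End Module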

DataReaders:
The DataReader object is somewhat synonymous with a read-only/forward-only
cursor over data. The DataReader API supports flat as well as hierarchical data. A
DataReader object is returned after executing a command against a database.
The format of the returned DataReader object is different from a recordset. For
example, you might use the DataReader to show the results of a search list in a
web page.
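
As a brief hedged sketch (assuming the Employees table of the Northwind
sample database), a command can be executed and its results read forward-only
through a DataReader like this.

' Hypothetical sketch: reading a forward-only stream of records with a SqlDataReader.
Imports System.Data.SqlClient

Module DataReaderExample
    Sub PrintEmployeeNames(ByVal connectionString As String)
        Using conn As New SqlConnection(connectionString)
            Using cmd As New SqlCommand("SELECT EmployeeID, LastName FROM Employees", conn)
                conn.Open()
                Using reader As SqlDataReader = cmd.ExecuteReader()
                    While reader.Read()
                        Console.WriteLine("{0}: {1}", reader("EmployeeID"), reader("LastName"))
                    End While
                End Using
            End Using
        End Using
    End Sub
End Module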

DATASETS AND DATAADAPTERS:


DataSets
The DataSet object is similar to the ADO Recordset object, but more powerful,
and with one other important distinction: the DataSet is always disconnected. The
DataSet object represents a cache of data, with database-like structures such as
tables, columns, relationships, and constraints. However, though a DataSet can
and does behave much like a database, it is important to remember that DataSet
objects do not interact directly with databases, or other source data. This allows
the developer to work with a programming model that is always consistent,
regardless of where the source data resides. Data coming from a database, an XML
file, from code, or user input can all be placed into DataSet objects. Then, as
changes are made to the DataSet they can be tracked and verified before
updating the source data. The GetChanges method of the DataSet object
actually creates a second DataSet that contains only the changes to the data. This
DataSet is then used by a DataAdapter (or other objects) to update the original
data source.
The DataSet has many XML characteristics, including the ability to produce and
consume XML data and XML schemas. XML schemas can be used to describe
schemas interchanged via WebServices. In fact, a DataSet with a schema can
actually be compiled for type safety and statement completion.

DATAADAPTERS (OLEDB/SQL)

The DataAdapter object works as a bridge between the DataSet and the
source data. Using the provider-specific SqlDataAdapter (along with its
associated SqlCommand and SqlConnection) can increase overall performance
when working with Microsoft SQL Server databases. For other OLE DB-supported
databases, you would use the OleDbDataAdapter object and its associated
OleDbCommand and OleDbConnection objects.
The DataAdapter object uses commands to update the data source after
changes have been made to the DataSet. Using the Fill method of the
DataAdapter calls the SELECT command; using the Update method calls the
INSERT, UPDATE or DELETE command for each changed row. You can explicitly set
these commands in order to control the statements used at runtime to resolve
changes, including the use of stored procedures. For ad-hoc scenarios, a
CommandBuilder object can generate these at run-time based upon a select
statement. However, this run-time generation requires an extra round-trip to the
server in order to gather required metadata, so explicitly providing the INSERT,
UPDATE, and DELETE commands at design time will result in better run-time
performance.
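
As a hedged sketch under the same Northwind-style assumptions (table and
column names are illustrative), the Fill/Update cycle described above might
look like this, with a CommandBuilder generating the ad-hoc commands at run
time.

' Hypothetical sketch: filling a DataSet, modifying it, and pushing the changes back.
Imports System.Data
Imports System.Data.SqlClient

Module DataAdapterExample
    Sub RenameFirstCustomer(ByVal connectionString As String)
        Using conn As New SqlConnection(connectionString)
            Dim adapter As New SqlDataAdapter("SELECT CustomerID, CompanyName FROM Customers", conn)
            ' For this ad-hoc case, let a CommandBuilder derive the INSERT/UPDATE/DELETE commands.
            Dim builder As New SqlCommandBuilder(adapter)

            Dim ds As New DataSet()
            adapter.Fill(ds, "Customers")      ' runs the SELECT command

            ' Work against the disconnected cache.
            Dim firstRow As DataRow = ds.Tables("Customers").Rows(0)
            firstRow("CompanyName") = "Renamed Company"

            adapter.Update(ds, "Customers")    ' runs the generated UPDATE for each changed row
        End Using
    End Sub
End Module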
1. ADO.NET is the next evolution of ADO for the .Net Framework.
2. ADO.NET was created with n-Tier, statelessness and XML in the forefront.
Two new objects, the DataSet and DataAdapter, are provided for these
scenarios.
3. ADO.NET can be used to get data from a stream, or to store data in a cache
for updates.
4. There is a lot more information about ADO.NET in the documentation.
5. Remember, you can execute a command directly against the database in
order to do inserts, updates, and deletes. You don't need to first put data into a
DataSet in order to insert, update, or delete it.
6. Also, you can use a DataSet to bind to the data, move through the data, and
navigate data relationships

Database Management System

A database management system (DBMS) is computer software designed for


the purpose of managing databases. Typical examples of DBMSs include Oracle,
DB2, Microsoft Access, Microsoft SQL Server, Firebird, PostgreSQL, MySQL, SQLite,
FileMaker and Sybase Adaptive Server Enterprise. DBMSs are typically used by
Database administrators in the creation of Database systems.
Description
A DBMS is a complex set of software programs that controls the organization,
storage, management, and retrieval of data in a database. A DBMS includes:
A modeling language to define the schema of each database hosted in the
DBMS, according to the DBMS data model.
The four most common types of organizations are the hierarchical, network,
relational and object models. Inverted lists and other methods are also used. A
given database management system may provide one or more of the four models.
The optimal structure depends on the natural organization of the application's
data, and on the application's requirements (which include transaction rate
(speed), reliability, maintainability, scalability, and cost).
The dominant model in use today is the ad hoc one embedded in SQL,
despite the objections of purists who believe this model is a corruption of the
relational model, since it violates several of its fundamental principles for the sake
of practicality and performance. Many DBMSs also support the Open Database
Connectivity (ODBC) API, which provides a standard way for programmers to access
the DBMS.
• Data structures (fields, records, files and objects) optimized to deal with very
large amounts of data stored on a permanent data storage device (which implies
relatively slow access compared to volatile main memory).
• A database query language and report writer that allow users to interactively
interrogate the database, analyze its data and update it according to the user's
privileges on the data. It also controls the security of the database.
Data security prevents unauthorized users from viewing or updating the
database. Using passwords, users are allowed access to the entire database or
subsets of it called subschemas. For example, an employee database can contain
all the data about an individual employee, but one group of users may be
authorized to view only payroll data, while others are allowed access to only work
history and medical data.
If the DBMS provides a way to interactively enter and update the database,
as well as interrogate it, this capability allows for managing personal databases.
However, it may not leave an audit trail of actions or provide the kinds of controls
necessary in a multi-user organization. These controls are only available when a
set of application programs is customized for each data entry and updating
function.
• A transaction mechanism that ideally would guarantee the ACID properties, in
order to ensure data integrity despite concurrent user accesses (concurrency
control) and faults (fault tolerance).
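From application code, ADO.NET exposes such a transaction mechanism through the SqlTransaction class. The sketch below (table names assumed for illustration) commits two statements as one atomic unit and rolls both back on failure.

Imports System
Imports System.Data.SqlClient

Module TransactionDemo
    Sub Main()
        Dim connStr As String = "Data Source=.;Initial Catalog=FleetDB;Integrated Security=True"

        Using conn As New SqlConnection(connStr)
            conn.Open()
            Dim tran As SqlTransaction = conn.BeginTransaction()
            Try
                ' Both statements succeed or fail together (atomicity).
                Using cmd1 As New SqlCommand("UPDATE Parts SET Quantity = Quantity - 1 WHERE PartId = 1", conn, tran)
                    cmd1.ExecuteNonQuery()
                End Using
                Using cmd2 As New SqlCommand("INSERT INTO PartIssues (PartId, IssuedOn) VALUES (1, GETDATE())", conn, tran)
                    cmd2.ExecuteNonQuery()
                End Using
                tran.Commit()
            Catch ex As Exception
                tran.Rollback()     ' leave the database unchanged on failure
                Throw
            End Try
        End Using
    End Sub
End Module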

It also maintains the integrity of the data in the database.


The DBMS can maintain the integrity of the database by not allowing more
than one user to update the same record at the same time. The DBMS can help
prevent duplicate records via unique index constraints; for example, no two
customers with the same customer numbers (key fields) can be entered into the
database. See ACID properties for more information (Redundancy avoidance).
The DBMS accepts requests for data from the application program and
instructs the operating system to transfer the appropriate data.
When a DBMS is used, information systems can be changed much more
easily as the organization's information requirements change. New categories of
data can be added to the database without disruption to the existing system.
Organizations may use one kind of DBMS for daily transaction processing and
then move the detail onto another computer that uses another DBMS better suited
for random inquiries and analysis. Overall systems design decisions are performed
by data administrators and systems analysts. Detailed database design is
performed by database administrators.
Database servers are specially designed computers that hold the actual
databases and run only the DBMS and related software. Database servers are
usually multiprocessor computers, with RAID disk arrays used for stable storage.
Connected to one or more servers via a high-speed channel, hardware database
accelerators are also used in large volume transaction processing environments.
DBMSs are found at the heart of most database applications. Sometimes
DBMSs are built around a private multitasking kernel with built-in networking
support, although nowadays these functions are left to the operating system.
Features and capabilities of DBMS
One can characterize a DBMS as an "attribute management system" where
attributes are small chunks of information that describe something. For example,
"colour" is an attribute of a car. The value of the attribute may be a color such as
"red", "blue" or "silver".
Alternatively, and especially in connection with the relational model of
database management, the relation between attributes drawn from a specified set
of domains can be seen as being primary. For instance, the database might
indicate that a car that was originally "red" might fade to "pink" in time, provided it
was of some particular "make" with an inferior paint job. Such higher relationships
provide information on all of the underlying domains at the same time, with none
of them being privileged above the others.
Throughout recent history specialized databases have existed for scientific,
geospatial, imaging, document storage and like uses. Functionality drawn from
such applications has lately begun appearing in mainstream DBMSs as well.
However, the main focus there, at least when aimed at the commercial data
processing market, is still on descriptive attributes on repetitive record structures.
Thus, the DBMSs of today roll together frequently-needed services or
features of attribute management. By externalizing such functionality to the DBMS,
applications effectively share code with each other and are relieved of much
internal complexity. Features commonly offered by database management
systems include:

Query ability

Querying is the process of requesting attribute information from various


perspectives and combinations of factors. Example: "How many 2-door cars in
Texas are green?"
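Such a query could be answered with a single SQL statement. The sketch below, which assumes a hypothetical Cars table that is not part of this project's schema, issues it through ADO.NET and reads the count with ExecuteScalar.

Imports System
Imports System.Data.SqlClient

Module QueryDemo
    Sub Main()
        Dim connStr As String = "Data Source=.;Initial Catalog=SampleDB;Integrated Security=True"
        Dim sql As String = "SELECT COUNT(*) FROM Cars " & _
                            "WHERE Doors = 2 AND State = 'Texas' AND Colour = 'Green'"

        Using conn As New SqlConnection(connStr)
            Using cmd As New SqlCommand(sql, conn)
                conn.Open()
                Dim greenTwoDoorCars As Integer = CInt(cmd.ExecuteScalar())
                Console.WriteLine("Green 2-door cars in Texas: " & greenTwoDoorCars)
            End Using
        End Using
    End Sub
End Module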
5.1. INTRODUCTION
Software design sits at the technical kernel of the software engineering
process and is applied regardless of the development paradigm and area of
application. Design is the first step in the development phase for any engineered
product or system. The designer’s goal is to produce a model or representation of
an entity that will later be built. Beginning, once system requirement have been
specified and analyzed, system design is the first of the three technical activities
-design, code and test that is required to build and verify software.
The importance can be stated with a single word “Quality”. Design is the
place where quality is fostered in software development. Design provides us with
representations of software that can be assessed for quality. Design is the only way that
we can accurately translate a customer’s view into a finished software product or
system. Software design serves as a foundation for all the software engineering
steps that follow. Without a strong design we risk building an unstable system –
one that will be difficult to test, one whose quality cannot be assessed until the last
stage.
During design, progressive refinements of data structure, program structure,
and procedural detail are developed, reviewed and documented. System design
can be viewed from either a technical or a project management perspective. From the
technical point of view, design comprises four activities: architectural design,
data structure design, interface design and procedural design.

5.2. DATA FLOW DIAGRAMS

A data flow diagram is a graphical tool used to describe and analyze the
movement of data through a system. These are the central tool and the basis from
which the other components are developed. The transformation of data from input
to output, through processes, may be described logically and independently of the
physical components associated with the system. These are known as logical
data flow diagrams. The physical data flow diagrams show the actual implementation
and movement of data between people, departments and workstations. A full
description of a system actually consists of a set of data flow diagrams. The data
flow diagrams are developed using two familiar notations: Yourdon, and Gane and
Sarson. Each component in a DFD is labeled with a descriptive name. A process is
further identified with a number that will be used for identification purposes. The
development of DFDs is done in several levels. Each process in a lower-level
diagram can be broken down into a more detailed DFD at the next level. The top-
level diagram is often called the context diagram. It consists of a single process, which
plays a vital role in studying the current system. The process in the context-level
diagram is exploded into other processes at the first-level DFD.
The idea behind the explosion of a process into more processes is that the
understanding at one level of detail is expanded into greater detail at the next
level. This is done until no further explosion is necessary and an adequate amount of
detail is described for the analyst to understand the process.
Larry Constantine first developed the DFD as a way of expressing system
requirements in a graphical form; this led to modular design.
A DFD, also known as a "bubble chart", has the purpose of clarifying system
requirements and identifying major transformations that will become programs in
system design. It is thus the starting point of the design, down to the lowest level of detail.
A DFD consists of a series of bubbles joined by data flows in the system.

DFD SYMBOLS:
In the DFD, there are four symbols:
1. A square defines a source (originator) or destination of system data.
2. An arrow identifies data flow. It is the pipeline through which the information
flows.
3. A circle or a bubble represents a process that transforms incoming data flows
into outgoing data flows.
4. An open rectangle is a data store, data at rest or a temporary repository of data.

Process that transforms data flow.


Source or Destination of data

Data flow

Data Store

CONSTRUCTING A DFD:
Several rules of thumb are used in drawing DFD’S:

1. Process should be named and numbered for an easy reference. Each name
should be representative of the process.
2. The direction of flow is from top to bottom and from left to right. Data
traditionally flow from source to the destination although they may flow back to
the source. One way to indicate this is to draw long flow line back to a source.
An alternative way is to repeat the source symbol as a destination. Since it is
used more than once in the DFD it is marked with a short diagonal.
3. When a process is exploded into lower level details, they are numbered.
4. The names of data stores and destinations are written in capital letters. Process
and dataflow names have the first letter of each work capitalized

A DFD typically shows the minimum contents of a data store. Each data store
should contain all the data elements that flow in and out.
Questionnaires should contain all the data elements that flow in and out.
Missing interfaces, redundancies and the like are then accounted for, often through
interviews.

SAILENT FEATURES OF DFD’S


1. The DFD shows the flow of data, not of control; loops and decisions are control
considerations and do not appear on a DFD.
2. The DFD does not indicate the time factor involved in any process, i.e. whether
the data flows take place daily, weekly, monthly or yearly.
3. The sequence of events is not brought out on the DFD.

TYPES OF DATA FLOW DIAGRAMS

1. Current Physical
2. Current Logical
3. New Logical
4. New Physical

CURRENT PHYSICAL:
In the Current Physical DFD, process labels include the names of people or their
positions, or the names of computer systems, that might provide some of the
overall system processing. The label also includes an identification of the technology
used to process the data. Similarly, data flows and data stores are often labeled with
the names of the actual physical media on which data are stored, such as file folders,
computer files, business forms or computer tapes.

CURRENT LOGICAL:
The physical aspects of the system are removed as much as possible, so that
the current system is reduced to its essence: the data and the processes that
transform them, regardless of their actual physical form.

NEW LOGICAL:
This is exactly like the current logical model if the user is completely happy
with the functionality of the current system but has problems with how it is
implemented. Typically, though, the new logical model will differ from the current
logical model by having additional functions, obsolete functions removed and
inefficient flows reorganized.

NEW PHYSICAL:
The new physical represents only the physical implementation of the new
system.
RULES GOVERNING THE DFD’S

PROCESS

1) No process can have only outputs.


2) No process can have only inputs. If an object has only inputs, then it must
be a sink.
3) A process has a verb phrase label.

DATA STORE
1) Data cannot move directly from one data store to another data store; a
process must move the data.
2) Data cannot move directly from an outside source to a data store; a
process, which receives data from the source, must place the data into the
data store.
3) A data store has a noun phrase label.

SOURCE OR SINK
The origin and / or destination of data.

1) Data cannot move directly from a source to a sink; it must be moved by a
process.
2) A source and/or sink has a noun phrase label.

DATA FLOW

1) A data flow has only one direction of flow between symbols. It may flow
in both directions between a process and a data store to show a read before an
update. The latter is usually indicated, however, by two separate arrows, since
these happen at different times.
2) A join in a DFD means that exactly the same data comes from any of two or
more different processes, data stores or sinks to a common location.
3) A data flow cannot go directly back to the same process it leaves. There
must be at least one other process that handles the data flow, produces some
other data flow and returns the original data flow to the beginning process.
4) A Data flow to a data store means update (delete or change).
5) A data Flow from a data store means retrieve or use.

A data flow has a noun phrase label; more than one data flow noun phrase can
appear on a single arrow as long as all of the flows on the same arrow move
together as one package.

PASTE YOUR DFD’S HERE

5.3. UML DIAGRAMS


Paste UML Diagrams here
Paste YOUR Screens Here
7.1 INTRODUCTION TO TESTING
Software Testing is the process used to help identify the correctness,
completeness, security, and quality of developed computer software. Testing is a
process of technical investigation, performed on behalf of stakeholders, that is
intended to reveal quality-related information about the product with respect to
the context in which it is intended to operate. This includes, but is not limited to,
the process of executing a program or application with the intent of finding errors.
Quality is not an absolute; it is value to some person. With that in mind, testing can
never completely establish the correctness of arbitrary computer software; testing
furnishes a criticism or comparison that compares the state and behavior of the
product against a specification. An important point is that software testing should
be distinguished from the separate discipline of Software Quality Assurance (SQA),
which encompasses all business process areas, not just testing.
There are many approaches to software testing, but effective testing of
complex products is essentially a process of investigation, not merely a matter of
creating and following routine procedure. One definition of testing is "the process
of questioning a product in order to evaluate it", where the "questions" are
operations the tester attempts to execute with the product, and the product
answers with its behavior in reaction to the probing of the tester.
Although most of the intellectual processes of testing are nearly identical to that of
review or inspection, the word testing is connoted to mean the dynamic analysis of
the product—putting the product through its paces. Some of the common quality
attributes include capability, reliability, efficiency, portability, maintainability,
compatibility and usability. A good test is sometimes described as one which
reveals an error; however, more recent thinking suggests that a good test is one
which reveals information of interest to someone who matters within the project
community.

Introduction
In general, software engineers distinguish software faults from software failures. In
case of a failure, the software does not do what the user expects. A fault is a
programming error that may or may not actually manifest as a failure. A fault can
also be described as an error in the correctness of the semantics of a computer
program. A fault will become a failure if the exact computation conditions are met,
one of them being that the faulty portion of computer software executes on the
CPU. A fault can also turn into a failure when the software is ported to a different
hardware platform or a different compiler, or when the software gets extended.
Software testing is the technical investigation of the product under test to provide
stakeholders with quality related information.

Software testing may be viewed as a sub-field of Software Quality Assurance


but typically exists independently (and there may be no SQA areas in some
companies). In SQA, software process specialists and auditors take a broader view
on software and its development. They examine and change the software
engineering process itself to reduce the number of faults that end up in the code or
deliver faster.
Regardless of the methods used or level of formality involved the desired result of
testing is a level of confidence in the software so that the organization is confident
that the software has an acceptable defect rate. What constitutes an acceptable
defect rate depends on the nature of the software. An arcade video game designed
to simulate flying an airplane would presumably have a much higher tolerance for
defects than software used to control an actual airliner.
A problem with software testing is that the number of defects in a software product
can be very large, and the number of configurations of the product larger still.
Bugs that occur infrequently are difficult to find in testing. A rule of thumb is that a
system that is expected to function without faults for a certain length of time must
have already been tested for at least that length of time. This has severe
consequences for projects to write long-lived reliable software.
A common practice of software testing is that it is performed by an
independent group of testers after the functionality is developed but before it is
shipped to the customer. This practice often results in the testing phase being
used as project buffer to compensate for project delays. Another practice is to start
software testing at the same moment the project starts and it is a continuous
process until the project finishes.
Another common practice is for test suites to be developed during technical
support escalation procedures. Such tests are then maintained in regression
testing suites to ensure that future updates to the software don't repeat any of the
known mistakes.
It is commonly believed that the earlier a defect is found the cheaper it is to
fix it.
                                Time Detected
Time Introduced     Requirements   Architecture   Construction   System Test   Post-Release
Requirements             1              3             5-10           10           10-100
Architecture             -              1              10            15           25-100
Construction             -              -               1            10           10-25
In counterpoint, some emerging software disciplines, such as extreme
programming and the agile software development movement, adhere to a "test-
driven software development" model. In this process unit tests are written first, by
the programmers (often with pair programming in the extreme programming
methodology). Of course these tests fail initially, as they are expected to. Then as
code is written it passes incrementally larger portions of the test suites. The test
suites are continuously updated as new failure conditions and corner cases are
discovered, and they are integrated with any regression tests that are developed.
Unit tests are maintained along with the rest of the software source code and
generally integrated into the build process (with inherently interactive tests being
relegated to a partially manual build acceptance process).
The software, tools, samples of data input and output, and configurations are
all referred to collectively as a test harness.
History
The separation of debugging from testing was initially introduced by Glenford
J. Myers in his 1979 book "The Art of Software Testing". Although his attention was
on breakage testing, it illustrated the desire of the software engineering community
to separate fundamental development activities, such as debugging, from that of
verification. Drs. Dave Gelperin and William C. Hetzel classified in 1988 the phases
and goals of software testing as follows: until 1956 it was the debugging-oriented
period, where testing was often associated with debugging and there was no clear
difference between testing and debugging. From 1957 to 1978 there was the
demonstration-oriented period, where debugging and testing were now distinguished;
in this period it was shown that software satisfies the requirements. The
time between 1979 and 1982 is known as the destruction-oriented period, where
the goal was to find errors. 1983-1987 is classified as the evaluation-oriented
period: the intention here is that during the software lifecycle a product evaluation is
provided and quality is measured. From 1988 on it was seen as the prevention-oriented
period, where tests were to demonstrate that software satisfies its specification, to
detect faults and to prevent faults. Dr. Gelperin chaired the IEEE 829-1988 (Test
Documentation Standard) effort, with Dr. Hetzel writing the book "The Complete Guide of
Software Testing". Both works were pivotal to today's testing culture and remain
a consistent source of reference. Dr. Gelperin and Jerry E. Durant also went on to
develop High Impact Inspection Technology, which builds upon traditional inspections
but utilizes a test-driven additive.
White-box and black-box testing

White box and black box testing are terms used to describe the point of view a test
engineer takes when designing test cases: black box refers to an external view of the
test object, and white box to an internal view. Software testing is partly
intuitive, but largely systematic. Good testing involves much more than just
running the program a few times to see whether it works. Thorough analysis of the
program under test, backed by a broad knowledge of testing techniques and tools
are prerequisites to systematic testing. Software Testing is the process of
executing software in a controlled manner; in order to answer the question “Does
this software behave as specified?” Software testing is used in association with
Verification and Validation. Verification is the checking of or testing of items,
including software, for conformance and consistency with an associated
specification. Software testing is just one kind of verification, which also uses
techniques such as reviews, inspections and walk-throughs. Validation is the process of
checking that what has been specified is what the user actually wanted.
• Validation: Are we doing the right job?
• Verification: Are we doing the job right?
In order to achieve consistency in the Testing style, it is imperative to have
and follow a set of testing principles. This enhances the efficiency of testing within
SQA team members and thus contributes to increased productivity. The purpose of
this document is to provide an overview of testing, plus the techniques.
At SDEI, three levels of software testing are done at various SDLC phases:
• Unit Testing: in which each unit (basic component) of the software is tested
to verify that the detailed design for the unit has been correctly implemented
(a small unit-test sketch is given after these lists)
• Integration testing: in which progressively larger groups of tested software
components corresponding to elements of the architectural design are
integrated and tested until the software works as a whole.
• System testing: in which the software is integrated to the overall product and
tested to show that all requirements are met
A further level of testing is also done, in accordance with requirements:
• Acceptance testing: upon which the acceptance of the complete software is
based. The clients often do this.
• Regression testing: refers to the repetition of earlier successful tests to
ensure that changes made in the software have not introduced new
bugs/side effects.
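As a minimal illustration of the unit-testing level listed above, the following sketch tests a small, hypothetical helper function using the NUnit framework. Neither the function nor the test exists in the delivered project; they are shown only to make the idea concrete.

Imports NUnit.Framework

' Hypothetical unit under test: decides whether a part has to be reordered.
Public Module StockRules
    Public Function NeedsReorder(quantityOnHand As Integer, reorderLevel As Integer) As Boolean
        Return quantityOnHand <= reorderLevel
    End Function
End Module

<TestFixture()> Public Class StockRulesTests

    <Test()> Public Sub QuantityBelowReorderLevel_ReturnsTrue()
        Assert.IsTrue(StockRules.NeedsReorder(3, 5))
    End Sub

    <Test()> Public Sub QuantityAboveReorderLevel_ReturnsFalse()
        Assert.IsFalse(StockRules.NeedsReorder(10, 5))
    End Sub
End Class

Each test exercises one condition of the unit in isolation, which is what the unit-testing level aims at before integration and system testing begin.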
In recent years the term grey box testing has come into common usage. The
typical grey box tester is permitted to set up or manipulate the testing
environment, like seeding a database, and can view the state of the product after
his actions, like performing a SQL query on the database to be certain of the
values of columns. It is used almost exclusively for client-server testers or others
who use a database as a repository of information, but can also apply to a tester
who has to manipulate XML files (a DTD or an actual XML file) or configuration files
directly. It can also be used for testers who know the internal workings or algorithm
of the software under test and can write tests specifically for the anticipated
results. For example, testing a data warehouse implementation involves loading
the target database with information, and verifying the correctness of data
population and loading of data into the correct tables.

Test levels
• Unit testing tests the minimal software component and sub-component or
modules by the programmers.
• Integration testing exposes defects in the interfaces and interaction between
integrated components (modules).
• Functional testing tests the product according to programmable work.
• System testing tests an integrated system to verify/validate that it meets its
requirements.
• Acceptance testing can be conducted by the client. It allows the end-
user, customer or client to decide whether or not to accept the product.
Acceptance testing may be performed after the testing phase and before the
implementation phase.
o Alpha testing is simulated or actual operational testing by potential
users/customers or an independent test team at the developers' site.
Alpha testing is often employed for off-the-shelf software as a form of
internal acceptance testing, before the software goes to beta testing.
o Beta testing comes after alpha testing. Versions of the software, known
as beta versions, are released to a limited audience outside of the
company. The software is released to groups of people so that further
testing can ensure the product has few faults or bugs. Sometimes,
beta versions are made available to the open public to increase the
feedback field to a maximal number of future users.
It should be noted that although both Alpha and Beta are referred to as
testing, they are in fact forms of use immersion. The rigors that are applied are often
unsystematic, and many of the basic tenets of the testing process are not used. The
Alpha and Beta period provides insight into environmental and utilization
conditions that can impact the software.
After modifying software, either for a change in functionality or to fix defects,
a regression test re-runs previously passing tests on the modified software to
ensure that the modifications haven't unintentionally caused a regression of
previous functionality. Regression testing can be performed at any or all of the
above test levels. These regression tests are often automated.
Test cases, suites, scripts and scenarios
A test case is a software testing document, which consists of event, action,
input, output, expected result and actual result. Clinically defined (IEEE 829-1998)
a test case is an input and an expected result. This can be as pragmatic as 'for
condition x your derived result is y', whereas other test cases describe in more
detail the input scenario and what results might be expected. It can occasionally
be a series of steps (but often steps are contained in a separate test procedure
that can be exercised against multiple test cases, as a matter of economy) but
with one expected result or expected outcome. The optional fields are a test case
ID, test step or order of execution number, related requirement(s), depth, test
category, author, and check boxes for whether the test is automatable and has
been automated. Larger test cases may also contain prerequisite states or steps,
and descriptions. A test case should also contain a place for the actual result.
These steps can be stored in a word processor document, spreadsheet, database
or other common repository. In a database system, you may also be able to see
past test results and who generated the results and the system configuration used
to generate those results. These past results would usually be stored in a separate
table.
The term test script is the combination of a test case, test procedure and test data.
Initially the term was derived from the byproduct of work created by automated
regression test tools. Today, test scripts can be manual, automated or a
combination of both.
The most common term for a collection of test cases is a test suite. The test
suite often also contains more detailed instructions or goals for each collection of
test cases. It definitely contains a section where the tester identifies the system
configuration used during testing. A group of test cases may also contain
prerequisite states or steps, and descriptions of the following tests.
Collections of test cases are sometimes incorrectly termed a test plan. They might
correctly be called a test specification. If sequence is specified, it can be called a
test script, scenario or procedure.
A sample testing cycle
Although testing varies between organizations, there is a cycle to testing:
1. Requirements Analysis: Testing should begin in the requirements phase of
the software development life cycle.
During the design phase, testers work with developers in determining what
aspects of a design are testable and under what parameter those tests work.
2. Test Planning: Test Strategy, Test Plan(s), Test Bed creation.
3. Test Development: Test Procedures, Test Scenarios, Test Cases, Test Scripts
to use in testing software.
4. Test Execution: Testers execute the software based on the plans and tests
and report any errors found to the development team.
5. Test Reporting: Once testing is completed, testers generate metrics and
make final reports on their test effort and whether or not the software tested
is ready for release.
6. Retesting the Defects
Not all errors or defects reported must be fixed by a software development team.
Some may be caused by errors in configuring the test software to match the
development or production environment. Some defects can be handled by a
workaround in the production environment. Others might be deferred to future
releases of the software, or the deficiency might be accepted by the business user.
There are yet other defects that may be rejected by the development team (of
course, with due reason) if they deem it inappropriate to be called a defect.

7.3 IMPLEMENTATION
Implementation is the process of converting a new or revised system design
into an operational one. There are three types of implementation:
 Implementation of a computer system to replace a manual system. The
problems encountered are converting files, training users, and verifying
printouts for integrity.
 Implementation of a new computer system to replace an existing one. This
is usually a difficult conversion. If not properly planned there can be many
problems.
 Implementation of a modified application to replace an existing one using
the same computer. This type of conversion is relatively easy to handle,
provided there are no major changes in the files.
Implementation in this project is done in all modules. In the
first module, user-level identification is done. In this module every user is
checked to verify whether they are a genuine user before they access the
database, and a session is generated for the user. Illegal use of any form is
strictly avoided.
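A minimal sketch of such a user check is given below. The Users table, its columns and the plain-text password comparison are assumptions made for illustration only; the actual project code may differ, and in practice the password would be stored and compared as a hash.

Imports System
Imports System.Data.SqlClient

Public Class LoginService
    Private ReadOnly _connStr As String

    Public Sub New(connStr As String)
        _connStr = connStr
    End Sub

    ' Returns True when the user name/password pair exists in the Users table.
    Public Function IsGenuineUser(userName As String, password As String) As Boolean
        Dim sql As String = "SELECT COUNT(*) FROM Users WHERE UserName = @u AND Password = @p"

        Using conn As New SqlConnection(_connStr)
            Using cmd As New SqlCommand(sql, conn)
                ' Parameters keep the values typed and avoid SQL injection from the login form.
                cmd.Parameters.AddWithValue("@u", userName)
                cmd.Parameters.AddWithValue("@p", password)
                conn.Open()
                Return CInt(cmd.ExecuteScalar()) > 0
            End Using
        End Using
    End Function
End Class

When this function returns True, the application would go on to create the session for the user; when it returns False, access to the database is refused.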
In the table creation module, tables are created with user-specified
fields, and the user can create many tables at a time. The user may specify
conditions, constraints and calculations in the creation of tables. The code
maintains the user requirements throughout the project.
In the updating module, the user can update, delete or insert a new record
into the database. This is a very important module in the project.
The user has to specify a field value in the form, and the application then
automatically retrieves the whole set of field values for that particular record.
In the reporting module, the user can get reports from the database in a
two-dimensional or three-dimensional view. The user has to select the table and
specify the condition, and the report is then generated for the user.
9.1. INTRODUCTION
The protection of computer-based resources, which include hardware,
software, data, procedures and people, against unauthorized use or natural

disaster is known as System Security.

System Security can be divided into four related issues:


• Security
• Integrity
• Privacy
• Confidentiality

SYSTEM SECURITY refers to the technical innovations and procedures applied to


the hardware and operating systems to protect against deliberate or accidental
damage from a defined threat.

DATA SECURITY is the protection of data from loss, disclosure, modification and
destruction.

SYSTEM INTEGRITY refers to the proper functioning of hardware and programs,


appropriate physical security and safety against external threats such as
eavesdropping and wiretapping.

PRIVACY defines the rights of the user or organizations to determine what


information they are willing to share with or accept from others and how the
organization can be protected against unwelcome, unfair or excessive
dissemination of information about it.

CONFIDENTIALITY is a special status given to sensitive information in a database


to minimize the possible invasion of privacy. It is an attribute of information that
characterizes its need for protection.
9.2. SECURITY IN SOFTWARE
System security refers to various validations on data in form of checks and
controls to avoid the system from failing. It is always important to ensure that only
valid data is entered and only valid operations are performed on the system. The
system employs two types of checks and controls:

CLIENT SIDE VALIDATION

Various client side validations are used to ensure on the client side that only
valid data is entered. Client side validation saves server time and load to handle
invalid data. Some checks imposed are:

• VBScript is used to ensure that required fields are filled with suitable data
only. Maximum lengths of the fields on the forms are appropriately defined.
• Forms cannot be submitted without filling up the mandatory data so that
manual mistakes of submitting empty fields that are mandatory can be sorted
out at the client side to save the server time and load.
• Tab-indexes are set according to the need and taking into account the ease of
user while working with the system.

SERVER SIDE VALIDATION


Some checks cannot be applied at client side. Server side checks are necessary to
save the system from failing and intimating the user that some invalid operation
has been performed or the performed operation is restricted. Some of the server
side checks imposed are:

• Server-side constraints have been imposed to check the validity of the primary key
and foreign key. A primary key value cannot be duplicated; any attempt to
duplicate a primary key value results in a message intimating the user about
those values. Forms using a foreign key can be updated only with existing
foreign key values (a small sketch of this check is given after this list).
• The user is intimated through appropriate messages about successful
operations or exceptions occurring at the server side.
• Various access control mechanisms have been built so that one user may not
interfere with another. Access permissions for the various types of users are
controlled according to the organizational structure. Only permitted users can
log on to the system and can have access according to their category. User
names, passwords and permissions are controlled on the server side.
• Using server side validation, constraints on several restricted operations are
imposed.
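The duplicate-key check mentioned in the first point can be sketched as follows. The table and column names are illustrative, and the primary key constraint defined in SQL Server remains the final safeguard against duplicates that slip through concurrently.

Imports System
Imports System.Data.SqlClient

Public Module VehicleValidation

    ' Server-side check: refuse an insert when the primary key value already exists.
    Public Function VehicleIdExists(connStr As String, vehicleId As Integer) As Boolean
        Using conn As New SqlConnection(connStr)
            Using cmd As New SqlCommand("SELECT COUNT(*) FROM Vehicles WHERE VehicleId = @id", conn)
                cmd.Parameters.AddWithValue("@id", vehicleId)
                conn.Open()
                Return CInt(cmd.ExecuteScalar()) > 0
            End Using
        End Using
    End Function
End Module

The calling form would display an intimation message and abandon the insert when this function returns True.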
It has been a great pleasure for me to work on this exciting and challenging
project. This project proved good for me as it provided practical knowledge of not
only programming a VB.NET Windows-based application, but also all the
handling procedures related to "Greyhound Fleet Manager". It also provided
knowledge about the latest technologies used in developing client-server
applications, which will be in great demand in the future. This will provide better
opportunities and guidance for developing projects independently in the future.

BENEFITS:

The project is identified by the merits of the system offered to the user. The merits
of this project are as follows:

• This project allows the user to enter data through simple and interactive forms.
This makes it very easy for the client to enter the desired information.
• The user is mainly concerned about the validity of the data he enters. There are
checks at every stage of any new creation, data entry or updation, so that the
user cannot enter invalid data which could create problems at a later date.
• Sometimes the user finds, in the later stages of using the project, that he needs to
update some of the information that he entered earlier. There are options for
him by which he can update the records. Moreover, there is a restriction that he
cannot change the primary data field; this preserves the validity of the data to a
longer extent.
• The user is provided with the option of monitoring the records he entered earlier.
He can see the desired records with the variety of options provided to him.
• From every part of the project the user is provided with links through
framing so that he can go from one option of the project to another as per the
requirement. This is bound to be simple and very friendly as far as the user is
concerned. That is, we can say that the project is user friendly, which is one of
the primary concerns of any good project.
• Data storage and retrieval will become faster and easier to maintain because
data is stored in a systematic manner and in a single database.
• The decision-making process will be greatly enhanced because of faster
processing of information, since data collection from information available on
the computer takes much less time than in a manual system.
• Allocation of sample results becomes much faster because the user can see
the records of past years at a time.
• Easier and faster data transfer through the latest technology associated with
computers and communication.
• Through these features it will increase efficiency, accuracy and
transparency.

LIMITATIONS:

• Training for simple computer operations is necessary for the users working on
the system.
• Online payments can be included in future.
• Managing prospective dealers, and availing discounts and offers to fetch more
profit, can be incorporated in the future.
• Integrating GPS with the application to track current vehicle information,
including its location, speed etc., could be very useful in instances when a
vehicle is missing for some reason; this can also be taken up in the future.
• Incorporating an auto-alert system to send vehicles for servicing.
• FOR .NET INSTALLATION
www.support.microsoft.com
• FOR DEPLOYMENT AND PACKING ON SERVER
www.developer.com
www.15seconds.com

• System Analysis and Design – Senn
• Software Engineering Concepts – Robert Pressman
