XXXXXXXXX
( Regd.No: XXXXXXXX )
Mr. XXXXXXXX
(Project Coordinator, XXXXXXXXXXX)
XXXXXXX UNIVERSITY
CERTIFICATE
XXXXXXX
ABSTRACT
The ‘Greyhound Fleet Manager’ keeps track of the maintenance performed for the
different vehicles which are used for transportation.
The super users of the system are the ‘ADMIN’ and the ‘MANAGERS’ of the
different departments allocated by the admin. The admin may be the owner of
the fleet. If any other vehicle is added to the fleet beyond those which
already exist, or if a newly appointed employee joins or an existing employee
is taken off, the details are updated accordingly. The maintenance to be
performed can also be scheduled for each type of vehicle. The details of
the parts/inventory used for the vehicles are maintained. The reorder level
and the reorder quantity are predefined for each particular type of part.
The vendors or suppliers provide the vehicles and parts and perform the
maintenance required for the vehicles. The particulars of the various vendors
are also maintained.
3. Existing System:
4. Proposed System:
In order to avoid the limitations of the existing system, the current system
is being developed.
All vehicle details will be automated, along with the staff information.
Scheduling of trips and repair information is being fully automated to
overcome the chaos in the existing process.
5. Features:
The functionalities provided by the project are as follows:
• Vehicle
• Employee information
• Reports
• Parts
• Location
• Repairs
• Vendor
• Maintenance
6. Modules:
The application comprises the following major modules.
• User Authentication
• Vehicle
• Inventory
• Employee information
• Maintenance
7. Requirements:
• Hardware requirements:
  Content   Description
  HDD       20 GB minimum (40 GB recommended)
  RAM       1 GB minimum (2 GB recommended)
• Software requirements:
  Content       Description
  OS            Windows XP with SP2, or Windows Vista
  Database      MS-SQL Server 2005
  Technologies  ASP.NET with VB.NET
  IDE           MS Visual Studio .NET 2008
  Browser       IE
CONTENTS
1. INTRODUCTION
2. SYSTEM ANALYSIS
3. FEASIBILITY STUDY
5. SYSTEM DESIGN
5.1 INTRODUCTION
5.2 DATA FLOW DIAGRAMS
5.3 UML DIAGRAMS
6. OUTPUT SCREENS
7. SYSTEM TESTING AND IMPLEMENTATION
8. SYSTEM SECURITY
8.1 INTRODUCTION
8.2 SECURITY IN SOFTWARE
9. CONCLUSION
11. BIBLIOGRAPHY
1.1. INTRODUCTION & OBJECTIVE
The ‘Greyhound Fleet Manager’ keeps track of the information about the
vehicles, and of the maintenance performed for the different vehicles which
are used for transportation.
The super users of the system are the ‘ADMIN’ and the ‘MANAGERS’ of the
different departments allocated by the admin. The admin may be the owner of
the fleet. If any other vehicle is added to the fleet beyond those which
already exist, or if a newly appointed employee joins or an existing employee
is taken off, the details are updated accordingly. The maintenance to be
performed can also be scheduled for each type of vehicle. The details of
the parts/inventory used for the vehicles are maintained. The reorder level
and the reorder quantity are predefined for each particular type of part.
The vendors or suppliers provide the vehicles and parts and perform the
maintenance required for the vehicles. The particulars of the various vendors
are also maintained.
• Managing huge fleet information manually is a tedious and error-prone task.
• In order to schedule vehicles as well as staff, the scheduler should note
how many vehicles are on board and available for allocation.
• Keeping track of repair information is a must, as sometimes vehicles might
be referred for insurance.
None of these things can be achieved in the existing system.
Current system is differentiated into the following modules which are closely
integrated with one another.
User Authentication
Vehicle
Employee Information
Maintenance
Inventory
Vehicle Module:
• Adding vehicles and/or equipment is a simple process and doesn’t require a
wealth of information.
• As long as we have the year, make, model, current Mi/Km/Hr and the base
information, that is all we need.
• We can add a vehicle with the most basic information.
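The "most basic information" listed above could be modelled as a simple VB.NET entity class. This is a hedged sketch; the class and field names are assumptions, not the project's actual schema:

```vb
' Hypothetical entity class mirroring the basic vehicle details above:
' year, make, model and the current Mi/Km/Hr reading.
Public Class Vehicle
    Public ModelYear As Integer
    Public Make As String
    Public Model As String
    Public CurrentReading As Double  ' current Mi/Km/Hr value
    Public ReadingUnit As String     ' "Mi", "Km" or "Hr"

    Public Sub New(ByVal modelYear As Integer, ByVal make As String, _
                   ByVal model As String, ByVal currentReading As Double, _
                   ByVal readingUnit As String)
        Me.ModelYear = modelYear
        Me.Make = make
        Me.Model = model
        Me.CurrentReading = currentReading
        Me.ReadingUnit = readingUnit
    End Sub
End Class
```

Further attributes (registration, department, assigned driver) can be added later without disturbing this minimal core.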
Employee Information:
• The application will keep track of many details, including employee number,
name, personal information, and license information.
• It also allows you to edit the employee information. Under this module
there will be a facility to add, delete, and modify the information regarding
an employee.
Maintenance Module:
Inventory Module:
User Authentication:
• This module involves administrator operations: vehicle registration,
creating users (employees), and authenticating the employees. The
administrator maintains the entire project.
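A minimal sketch of the credential check this module would perform is shown below. The database name, table and column names are assumptions for illustration, and in a real deployment the password would be stored and compared as a salted hash rather than plain text:

```vb
Imports System.Data.SqlClient

Public Class AuthService
    ' Returns True when the supplied credentials match a stored employee record.
    Public Function Authenticate(ByVal userName As String, _
                                 ByVal password As String) As Boolean
        ' Hypothetical connection string and schema (FleetDB, Employees).
        Dim connString As String = _
            "Data Source=(local);Initial Catalog=FleetDB;Integrated Security=True"

        Using conn As New SqlConnection(connString)
            Dim cmd As New SqlCommand( _
                "SELECT COUNT(*) FROM Employees " & _
                "WHERE UserName = @U AND PasswordHash = @P", conn)
            cmd.Parameters.AddWithValue("@U", userName)
            cmd.Parameters.AddWithValue("@P", password) ' compare a hash in practice
            conn.Open()
            Return CInt(cmd.ExecuteScalar()) > 0
        End Using
    End Function
End Class
```

The parameterized query also guards the login form against SQL injection.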
OUTPUT DESIGN
Outputs from computer systems are required primarily to communicate the results
of processing to users. They are also used to provide a permanent copy of the
results for later consultation. The various types of outputs in general are:
• External outputs, whose destination is outside the organization.
• Internal outputs, whose destination is within the organization and which
are the user's main interface with the computer.
• Operational outputs, whose use is purely within the computer department.
• Interface outputs, which involve the user in communicating directly with
the computer.
SDLC MODEL:
Waterfall Model
Software products are oriented towards customers like any other
engineering products. A product either is driven by the market or drives the
market. Customer satisfaction was the main aim in the 1980s. Customer delight
is today's logo, and customer ecstasy is the buzzword of the new millennium.
Products which are not customer oriented have no place in the market, even if
they are designed using the best technology. The front end of the product is
as crucial as the internal technology of the product.
A market study is necessary to identify a potential customer's needs. This
process is also called market research. The already existing needs and the
possible future needs are combined together for the study. A lot of
assumptions are made during a market study, and assumptions are very
important factors in the development or start of a product's development.
Assumptions which are not realistic can cause a nosedive of the entire
venture. Although assumptions are conceptual, there should be a move to
develop tangible assumptions to move towards a successful product.
Once the market study is done, the customer's needs are given to the Research
and Development department to develop a cost-effective system that could
potentially solve the customer's needs better than the competitors'. Once the system is
developed and tested in a hypothetical environment, the development team takes
control of it. The development team adopts one of the software development
models to develop the proposed system and gives it to the customers.
The basic popular models used by many software development firms are as
follows:
A) System Development Life Cycle (SDLC) Model
B) Prototyping Model
C) Rapid Application Development Model
D) Component Assembly Model
A) System Development Life Cycle Model (SDLC Model):
This is also called the Classic Life Cycle Model (or) Linear Sequential Model (or)
Waterfall Method. This model has the following activities.
4) Code Generation
In the code generation phase, the design must be translated into a machine-
readable form. If the design of the software product is done in a detailed
manner, code generation can be achieved without much complication. For the
generation of code, programming tools like compilers, interpreters and
debuggers are used. For coding, different high-level programming languages
like C, C++, Pascal and Java are used. The right programming language is
chosen according to the type of application.
5) Testing
After the code generation phase, testing of the software program begins.
Different testing methods are available to detect the bugs that were
introduced during the previous phases. A number of testing tools and methods
are already available for this purpose.
6) Maintenance
Software will definitely go through change once it is delivered to the
customer. There are many reasons for such change. Change could happen due to
some unpredicted input values into the system. In addition to this, changes
in the system directly have an effect on the software's operation. The
software should be implemented to accommodate changes that could happen
during the post-development period.
Features of OOAD:
• It uses objects as the building blocks of the application rather than functions.
• All objects can be represented graphically including the relation between
them.
• All Key Participants in the system will be represented as actors and the
actions done by them will be represented as use cases.
• A typical use case is nothing but a systematic flow of a series of events,
which can be well described using sequence diagrams, and each event can
be described diagrammatically by activity as well as state chart diagrams.
• So the entire system can be well described using the OOAD model; hence
this model is chosen as the SDLC model.
Three-Tier Architecture
• Tier 1: the client contains the presentation logic, including simple control and
user input validation. This application is also known as a thin client. The
client interface is developed using ASP.NET server controls and, on some
occasions, HTML controls.
• Tier 2: the middle tier is also known as the application server, which provides
the business process logic and the data access. The business logic/
business rules can be written in either the C#.NET or VB.NET language. These
business rules will be deployed as DLLs in the IIS web server.
• Tier 3: the data server provides the business data. MS-SQL server acts as
Tier-3, which is the database layer.
The proposed system can be designed perfectly with the three tier model, as all
layers are perfectly getting set as part of the project. In the future, while
expanding the system, in order to implement integration touch points and to
provide enhanced user interfaces, the n-tier architecture can be used.
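As a sketch of how a Tier-2 business rule might look in this architecture, the reorder check described in the abstract could live in the middle tier. The class and member names below are illustrative assumptions, not taken from the actual project:

```vb
' Tier-2 sketch: a business-logic class the presentation layer would call.
' Names (MaintenanceRules, NeedsReorder) are hypothetical.
Public Class MaintenanceRules

    ' Business rule: a part must be reordered once the quantity on hand
    ' falls to (or below) its predefined reorder level.
    Public Function NeedsReorder(ByVal quantityOnHand As Integer, _
                                 ByVal reorderLevel As Integer) As Boolean
        Return quantityOnHand <= reorderLevel
    End Function
End Class
```

Compiled into a DLL and deployed under IIS, such a class keeps the rule out of the ASP.NET pages (Tier 1) and away from the database (Tier 3).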
3.1 TECHNICAL FEASIBILITY:
Evaluating the technical feasibility is the trickiest part of a feasibility study.
This is because, at this point in time, not much detailed design of the system
has been done, making it difficult to assess issues like performance, costs
(on account of the kind of technology to be deployed), etc.
A number of issues have to be considered while doing a technical analysis.
Understand the different technologies involved in the proposed system:
Before commencing the project, we have to be very clear about the
technologies that are required for the development of the new system.
Find out whether the organization currently possesses the required technologies:
Is the required technology available with the organization?
If so is the capacity sufficient?
For instance –
“Will the current printer be able to handle the new reports and forms
required for the new system?”
Purpose: The main purpose for preparing this document is to give a general insight into
the analysis and requirements of the existing system or situation and for determining the
operating characteristics of the system.
Scope: This document plays a vital role in the development life cycle (SDLC), as it
describes the complete requirements of the system. It is meant for use by the
developers and will be the basis during the testing phase. Any changes made to
the requirements in the future will have to go through a formal change
approval process.
• Developing the system, which meets the SRS and solves all the requirements
of the system.
• Demonstrating the system and installing the system at client's location after the
acceptance testing is successful.
• Submitting the required user manual describing the system interfaces to work on it
and also the documents of the system.
• Conducting any user training that might be needed for using the system.
• Maintaining the system for a period of one year after installation.
The requirement specification for any system can be broadly stated as given below:
• The system should be able to interface with the existing system
• The system should be accurate
• The system should be better than the existing system
The existing system is completely dependent on the user to perform all the duties.
The following illustration shows the relationship of the common language runtime
and the class library to your applications and to the overall system. The illustration also
shows how managed code operates within a larger architecture.
The .NET Framework class library is a collection of reusable types that tightly
integrate with the common language runtime. The class library is object oriented,
providing types from which your own managed code can derive functionality. This not only
makes the .NET Framework types easy to use, but also reduces the time associated with
learning new features of the .NET Framework. In addition, third-party components can
integrate seamlessly with classes in the .NET Framework.
For example, the .NET Framework collection classes implement a set of interfaces
that you can use to develop your own collection classes. Your collection classes will blend
seamlessly with the classes in the .NET Framework.
As you would expect from an object-oriented class library, the .NET Framework
types enable you to accomplish a range of common programming tasks, including tasks
such as string management, data collection, database connectivity, and file access. In
addition to these common tasks, the class library includes types that support a variety of
specialized development scenarios. For example, you can use the .NET Framework to
develop the following types of applications and services:
• Console applications.
• Scripted or hosted applications.
• Windows GUI applications (Windows Forms).
• ASP.NET applications.
• XML Web services.
• Windows services.
For example, the Windows Forms classes are a comprehensive set of reusable types
that vastly simplify Windows GUI development. If you write an ASP.NET Web Form
application, you can use the Web Forms classes.
Declarations
A Visual Basic program is made up of named entities. These entities are
introduced through declarations and represent the "meaning" of the program.
At a top level, namespaces are entities that organize other entities, such as
nested namespaces and types. Types are entities that describe values and define
executable code. Types may contain nested types and type members. Type
members are constants, variables, methods, operators, properties, events,
enumeration values, and constructors.
An entity that can contain other entities defines a declaration space. Entities are
introduced into a declaration space either through declarations or inheritance; the
containing declaration space is called the entities' declaration context. Declaring
an entity in a declaration space in turn defines a new declaration space that can
contain further nested entity declarations; thus, the declarations in a program form
a hierarchy of declaration spaces.
Except in the case of overloaded type members, it is invalid for declarations to
introduce identically named entities of the same kind into the same declaration
context. Additionally, a declaration space may never contain different kinds of
entities with the same name; for example, a declaration space may never contain
a variable and a method by the same name.
Annotation
It may be possible in other languages to create a declaration space that
contains different kinds of entities with the same name (for example, if the
language is case sensitive and allows different declarations based on casing). In
that situation, the most accessible entity is considered bound to that name; if more
than one type of entity is most accessible then the name is ambiguous. Public is
more accessible than Protected Friend, Protected Friend is more accessible than
Protected or Friend, and Protected or Friend is more accessible than Private.
For example, the following two declarations both contribute entities to the
same declaration space:

Namespace Data
    Class Order
    End Class
End Namespace

Namespace Data
    Class OrderDetail
    End Class
End Namespace
Because the two declarations contribute to the same declaration space, a compile-
time error would occur if each contained a declaration of a class with the same
name.
Overloading and Signatures
The only way to declare identically named entities of the same kind in a
declaration space is through overloading. Only methods, operators, instance
constructors, and properties may be overloaded.
Overloaded type members must possess unique signatures. The signature of a
type member consists of the name of the type member, the number of type
parameters, and the number and types of the member's parameters. Conversion
operators also include the return type of the operator in the signature.
The following are not part of a member's signature, and hence cannot be
overloaded on:
• Modifiers to a type member (for example, Shared or Private).
• Modifiers to a parameter (for example, ByVal or ByRef).
• The names of the parameters.
• The return type of a method or operator (except for conversion operators) or
the element type of a property.
• Constraints on a type parameter.
The following example shows a set of overloaded method declarations along
with their signatures. This declaration would not be valid since several of the
method declarations have identical signatures.
Interface ITest
    Sub F1()                              ' Signature is F1().
    Sub F2(x As Integer)                  ' Signature is F2(Integer).
    Sub F3(ByRef x As Integer)            ' Signature is F3(Integer).
    Sub F4(x As Integer, y As Integer)    ' Signature is F4(Integer, Integer).
    Function F5(s As String) As Integer   ' Signature is F5(String).
    Function F6(x As Integer) As Integer  ' Signature is F6(Integer).
    Sub F7(a() As String)                 ' Signature is F7(String()).
    Sub F8(ParamArray a() As String)      ' Signature is F8(String()).
    Sub F9(Of T)()                        ' Signature is F9!1().
    Sub F10(Of T, U)(x As T, y As U)      ' Signature is F10!2(!1, !2).
    Sub F11(Of U, T)(x As T, y As U)      ' Signature is F11!2(!2, !1).
    Sub F12(Of T)(x As T)                 ' Signature is F12!1(!1).
    Sub F13(Of T As IDisposable)(x As T)  ' Signature is F13!1(!1).
End Interface
A method with optional parameters is considered to have multiple signatures,
one for each set of parameters that can be passed in by the caller. For example,
the following method has three corresponding signatures:
Sub F(x As Short, _
Optional y As Integer = 10, _
Optional z As Long = 20)
These are the method's signatures:
• F(Short)
• F(Short, Integer)
• F(Short, Integer, Long)
It is valid to define a generic type that may contain members with identical
signatures based on the type arguments supplied. Overload resolution rules are
used to try and disambiguate between such overloads, although there may be
situations in which it is impossible to disambiguate. For example:
Class C(Of T)
    Sub F(x As Integer)
    End Sub

    Sub F(x As T)
    End Sub

    Sub G(Of U)(x As T, y As U)
    End Sub

    Sub G(Of U)(x As U, y As T)
    End Sub
End Class

Module Test
    Sub Main()
        Dim x As New C(Of Integer)
        x.F(10)                  ' Calls C(Of T).F(Integer)
        x.G(Of Integer)(10, 10)  ' Error: Can't choose between overloads
    End Sub
End Module
Shadowing
A derived type shadows the name of an inherited type member by re-
declaring it. Shadowing a name does not remove the inherited type members with
that name; it merely makes all of the inherited type members with that name
unavailable in the derived class. The shadowing declaration may be any type of
entity.
Entities that can be overloaded can choose one of two forms of shadowing.
Shadowing by name is specified using the Shadows keyword. An entity that
shadows by name hides everything by that name in the base class, including all
overloads. Shadowing by name and signature is specified using the Overloads
keyword. An entity that shadows by name and signature hides everything by that
name with the same signature as the entity.
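The two forms of shadowing can be illustrated with a short sketch. The class and member names below are invented for illustration:

```vb
Class Base
    Sub F()
    End Sub
    Sub F(x As Integer)
    End Sub
End Class

Class Derived1
    Inherits Base

    ' Shadows by name: hides every Base.F, including Base.F(Integer).
    Shadows Sub F()
    End Sub
End Class

Class Derived2
    Inherits Base

    ' Shadows by name and signature: hides only Base.F();
    ' Base.F(Integer) is still callable on Derived2.
    Overloads Sub F()
    End Sub
End Class
```

Calling F(10) on a Derived1 instance is therefore an error, while on a Derived2 instance it still binds to the inherited Base.F(Integer).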
ADO.NET OVERVIEW
ADO.NET uses some ADO objects, such as the Connection and Command
objects, and also introduces new objects. Key new ADO.NET objects include the
DataSet, DataReader, and DataAdapter.
Connections:
Connections are used to 'talk to' databases, and are represented by
provider-specific classes such as SqlConnection. Commands travel over
connections and resultsets are returned in the form of streams which can be read
by a DataReader object, or pushed into a DataSet object.
Commands:
Commands contain the information that is submitted to a database, and are
represented by provider-specific classes such as SqlCommand. A command can
be a stored procedure call, an UPDATE statement, or a statement that returns
results. You can also use input and output parameters, and return values as part of
your command syntax. The example below shows how to issue an INSERT
statement against the Northwind database.
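The referenced listing did not survive in this copy, so the following is a minimal VB.NET sketch. The connection string is a placeholder for the local environment, and the Shippers table and its columns are assumptions based on the standard Northwind sample database:

```vb
Imports System.Data.SqlClient

Module InsertExample
    Sub Main()
        ' Hypothetical connection string; adjust the server name as needed.
        Dim connString As String = _
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=True"

        Using conn As New SqlConnection(connString)
            ' A parameterized INSERT avoids SQL injection and quoting issues.
            Dim cmd As New SqlCommand( _
                "INSERT INTO Shippers (CompanyName, Phone) VALUES (@Name, @Phone)", conn)
            cmd.Parameters.AddWithValue("@Name", "Speedy Express 2")
            cmd.Parameters.AddWithValue("@Phone", "(503) 555-0100")

            conn.Open()
            ' ExecuteNonQuery returns the number of rows affected.
            Dim rows As Integer = cmd.ExecuteNonQuery()
        End Using
    End Sub
End Module
```

The same SqlCommand object could equally wrap a stored procedure call by setting CommandType to StoredProcedure.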
DataReaders:
The DataReader object is somewhat synonymous with a read-only/forward-only
cursor over data. The DataReader API supports flat as well as hierarchical data. A
DataReader object is returned after executing a command against a database.
The format of the returned DataReader object is different from a recordset. For
example, you might use the DataReader to show the results of a search list in a
web page.
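As a hedged illustration of the read-only, forward-only behaviour described above (the table and connection string are assumptions based on the Northwind sample database):

```vb
Imports System.Data.SqlClient

Module ReaderExample
    Sub Main()
        ' Hypothetical connection string; adjust for your environment.
        Dim connString As String = _
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=True"

        Using conn As New SqlConnection(connString)
            Dim cmd As New SqlCommand( _
                "SELECT CustomerID, CompanyName FROM Customers", conn)
            conn.Open()

            ' ExecuteReader returns a forward-only, read-only stream of rows.
            Using reader As SqlDataReader = cmd.ExecuteReader()
                While reader.Read()
                    Console.WriteLine("{0}: {1}", _
                        reader.GetString(0), reader.GetString(1))
                End While
            End Using
        End Using
    End Sub
End Module
```

Because the reader streams rows while the connection stays open, it suits result lists rendered once, such as the search-results page mentioned above.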
DATAADAPTERS (OLEDB/SQL)
The DataAdapter object works as a bridge between the DataSet and the
source data. Using the provider-specific SqlDataAdapter (along with its
associated SqlCommand and SqlConnection) can increase overall performance
when working with Microsoft SQL Server databases. For other OLE DB-supported
databases, you would use the OleDbDataAdapter object and its associated
OleDbCommand and OleDbConnection objects.
The DataAdapter object uses commands to update the data source after
changes have been made to the DataSet. Using the Fill method of the
DataAdapter calls the SELECT command; using the Update method calls the
INSERT, UPDATE or DELETE command for each changed row. You can explicitly set
these commands in order to control the statements used at runtime to resolve
changes, including the use of stored procedures. For ad-hoc scenarios, a
CommandBuilder object can generate these at run-time based upon a select
statement. However, this run-time generation requires an extra round-trip to the
server in order to gather required metadata, so explicitly providing the INSERT,
UPDATE, and DELETE commands at design time will result in better run-time
performance.
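The Fill/Update cycle described above can be sketched as follows. The connection string and the Shippers table are assumptions based on the Northwind sample database, and a CommandBuilder is used for the ad-hoc case rather than design-time commands:

```vb
Imports System.Data
Imports System.Data.SqlClient

Module AdapterExample
    Sub Main()
        ' Hypothetical connection string; adjust for your environment.
        Dim connString As String = _
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=True"

        Using conn As New SqlConnection(connString)
            ' The SELECT includes the primary key, which the
            ' CommandBuilder needs to generate UPDATE/DELETE commands.
            Dim adapter As New SqlDataAdapter( _
                "SELECT ShipperID, CompanyName, Phone FROM Shippers", conn)
            Dim builder As New SqlCommandBuilder(adapter)

            Dim ds As New DataSet()
            adapter.Fill(ds, "Shippers")      ' Fill issues the SELECT command.

            ' Change a row in the disconnected DataSet...
            ds.Tables("Shippers").Rows(0)("Phone") = "(503) 555-0199"

            ' ...then Update issues the generated UPDATE for the changed row.
            adapter.Update(ds, "Shippers")
        End Using
    End Sub
End Module
```

As the text notes, supplying explicit INSERT/UPDATE/DELETE commands at design time avoids the CommandBuilder's extra metadata round-trip.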
1. ADO.NET is the next evolution of ADO for the .Net Framework.
2. ADO.NET was created with n-Tier, statelessness and XML in the forefront.
Two new objects, the DataSet and DataAdapter, are provided for these
scenarios.
3. ADO.NET can be used to get data from a stream, or to store data in a cache
for updates.
4. There is a lot more information about ADO.NET in the documentation.
5. Remember, you can execute a command directly against the database in
order to do inserts, updates, and deletes. You don't need to first put data into a
DataSet in order to insert, update, or delete it.
6. Also, you can use a DataSet to bind to the data, move through the data, and
navigate data relationships.
DFD SYMBOLS:
In the DFD, there are four symbols:
1. A square defines a source (originator) or destination of system data.
2. An arrow identifies data flow. It is the pipeline through which the
information flows.
3. A circle or a bubble represents a process that transforms incoming data
flows into outgoing data flows.
4. An open rectangle is a data store: data at rest, or a temporary repository
of data.
CONSTRUCTING A DFD:
Several rules of thumb are used in drawing DFDs:
1. Processes should be named and numbered for easy reference. Each name
should be representative of the process.
2. The direction of flow is from top to bottom and from left to right. Data
traditionally flow from source to the destination although they may flow back to
the source. One way to indicate this is to draw long flow line back to a source.
An alternative way is to repeat the source symbol as a destination. Since it is
used more than once in the DFD it is marked with a short diagonal.
3. When a process is exploded into lower level details, they are numbered.
4. The names of data stores and destinations are written in capital letters.
Process and data flow names have the first letter of each word capitalized.
A DFD typically shows the minimum contents of data store. Each data store
should contain all the data elements that flow in and out.
Questionnaires should contain all the data elements that flow in and out.
Missing interfaces, redundancies and the like are then accounted for, often
through interviews.
1. Current Physical
2. Current Logical
3. New Logical
4. New Physical
CURRENT PHYSICAL:
In the Current Physical DFD, process labels include the names of people or
their positions, or the names of the computer systems, that might provide some
of the overall system processing. The labels also include an identification of
the technology used to process the data. Similarly, data flows and data stores
are often labelled with the names of the actual physical media on which data
are stored, such as file folders, computer files, business forms or computer
tapes.
CURRENT LOGICAL:
The physical aspects of the system are removed as much as possible, so that
the current system is reduced to its essence: the data and the processes that
transform them, regardless of actual physical form.
NEW LOGICAL:
This is exactly like the current logical model if the user were completely
happy with the functionality of the current system but had problems with how
it was implemented. Typically, though, the new logical model will differ from
the current logical model by having additional functions, obsolete functions
removed, and inefficient flows reorganized.
NEW PHYSICAL:
The new physical represents only the physical implementation of the new
system.
RULES GOVERNING THE DFD’S
PROCESS
DATA STORE
1) Data cannot move directly from one data store to another data store, a
process must move data.
2) Data cannot move directly from an outside source to a data store; a
process, which receives the data, must move it from the source and place it
into the data store.
3) A data store has a noun phrase label.
SOURCE OR SINK
The origin and / or destination of data.
DATA FLOW
1) A data flow has only one direction of flow between symbols. It may flow
in both directions between a process and a data store to show a read before an
update. The latter is usually indicated, however, by two separate arrows,
since these happen at different times.
2) A join in a DFD means that exactly the same data comes from any of two or
more different processes, data stores or sinks to a common location.
3) A data flow cannot go directly back to the same process it leaves. There
must be at least one other process that handles the data flow, produces some
other data flow, and returns the original data flow to the beginning process.
4) A Data flow to a data store means update (delete or change).
5) A data Flow from a data store means retrieve or use.
A data flow has a noun phrase label. More than one data flow noun phrase can
appear on a single arrow, as long as all of the flows on the same arrow move
together as one package.
Introduction
In general, software engineers distinguish software faults from software failures. In
case of a failure, the software does not do what the user expects. A fault is a
programming error that may or may not actually manifest as a failure. A fault can
also be described as an error in the semantics of a computer
program. A fault will become a failure if the exact computation conditions are met,
one of them being that the faulty portion of computer software executes on the
CPU. A fault can also turn into a failure when the software is ported to a different
hardware platform or a different compiler, or when the software gets extended.
Software testing is the technical investigation of the product under test to provide
stakeholders with quality related information.
White box and black box testing are terms used to describe the point of view a test
engineer takes when designing test cases. Black box being an external view of the
test object and white box being an internal view. Software testing is partly
intuitive, but largely systematic. Good testing involves much more than just
running the program a few times to see whether it works. Thorough analysis of the
program under test, backed by a broad knowledge of testing techniques and tools
are prerequisites to systematic testing. Software Testing is the process of
executing software in a controlled manner; in order to answer the question “Does
this software behave as specified?” Software testing is used in association with
Verification and Validation. Verification is the checking of or testing of items,
including software, for conformance and consistency with an associated
specification. Software testing is just one kind of verification, which also uses
such techniques as reviews, inspections and walk-throughs. Validation is the
process of checking that what has been specified is what the user actually
wanted.
• Validation: Are we doing the right job?
• Verification: Are we doing the job right?
In order to achieve consistency in the Testing style, it is imperative to have
and follow a set of testing principles. This enhances the efficiency of testing within
SQA team members and thus contributes to increased productivity. The purpose of
this document is to provide overview of the testing, plus the techniques.
At SDEI, 3 levels of software testing are done at various SDLC phases:
• Unit Testing: in which each unit (basic component) of the software is tested
to verify that the detailed design for the unit has been correctly implemented
• Integration testing: in which progressively larger groups of tested software
components corresponding to elements of the architectural design are
integrated and tested until the software works as a whole.
• System testing: in which the software is integrated to the overall product and
tested to show that all requirements are met
A further level of testing is also done, in accordance with requirements:
• Acceptance testing: upon which the acceptance of the complete software is
based. The clients often do this.
• Regression testing: is used to refer to the repetition of the earlier successful
tests to ensure that changes made in the software have not introduced new
bugs/side effects.
In recent years the term grey box testing has come into common usage. The
typical grey box tester is permitted to set up or manipulate the testing
environment, like seeding a database, and can view the state of the product after
his actions, like performing a SQL query on the database to be certain of the
values of columns. It is used almost exclusively by client-server testers or others
who use a database as a repository of information, but can also apply to a tester
who has to manipulate XML files (DTD or an actual XML file) or configuration files
directly. It can also be used for testers who know the internal workings or algorithm
of the software under test and can write tests specifically for the anticipated
results. For example, testing a data warehouse implementation involves loading
the target database with information, and verifying the correctness of data
population and loading of data into the correct tables.
Test levels
• Unit testing tests the minimal software components and sub-components or
modules; it is done by the programmers.
• Integration testing exposes defects in the interfaces and interaction between
integrated components (modules).
• Functional testing tests the product according to its functional requirements.
• System testing tests an integrated system to verify/validate that it meets its
requirements.
• Acceptance testing can be conducted by the client. It allows the end-
user or customer or client to decide whether or not to accept the product.
Acceptance testing may be performed after the testing and before the
implementation phase. See also Development stage.
o Alpha testing is simulated or actual operational testing by potential
users/customers or an independent test team at the developers' site.
Alpha testing is often employed for off-the-shelf software as a form of
internal acceptance testing, before the software goes to beta testing.
o Beta testing comes after alpha testing. Versions of the software, known
as beta versions, are released to a limited audience outside of the
company. The software is released to groups of people so that further
testing can ensure the product has few faults or bugs. Sometimes,
beta versions are made available to the open public to increase the
feedback field to a maximal number of future users.
It should be noted that although both alpha and beta are referred to as
testing, they are in fact forms of use immersion. The rigors applied are often
unsystematic, and many of the basic tenets of the testing process are not used.
The alpha and beta periods nevertheless provide insight into environmental and
utilization conditions that can impact the software.
After modifying software, either for a change in functionality or to fix defects,
a regression test re-runs previously passing tests on the modified software to
ensure that the modifications haven't unintentionally caused a regression of
previous functionality. Regression testing can be performed at any or all of the
above test levels. These regression tests are often automated.
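The idea can be sketched in a few lines; the function under test and the recorded baseline cases below are invented for illustration (a real project would replay its saved test suite):

```python
# Minimal regression-test sketch: re-run previously passing cases after a change.

def fare_for_distance(km):
    """Function under test (toy example): flat charge plus per-kilometre rate."""
    return 50 + 10 * km

# Previously passing (input, expected output) pairs, recorded before the change.
baseline_cases = [(0, 50), (1, 60), (10, 150)]

def run_regression(fn, cases):
    # Collect every case whose current output no longer matches the baseline.
    return [(x, expected, fn(x)) for x, expected in cases if fn(x) != expected]

failures = run_regression(fare_for_distance, baseline_cases)
assert not failures, f"regression detected: {failures}"
print("all", len(baseline_cases), "baseline cases still pass")
```

An empty failure list means the modification has not regressed any previously passing behaviour; anything in the list is exactly the defect report described above.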
Test cases, suites, scripts and scenarios
A test case is a software testing document which records the event, action,
input, output, expected result and actual result. Formally defined (IEEE 829-1998),
a test case is an input and an expected result. This can be as pragmatic as 'for
condition x your derived result is y', whereas other test cases describe the
input scenario and the expected results in more detail. A test case can
occasionally be a series of steps (but often steps are contained in a separate
test procedure that can be exercised against multiple test cases, as a matter of
economy) but
with one expected result or expected outcome. The optional fields are a test case
ID, test step or order of execution number, related requirement(s), depth, test
category, author, and check boxes for whether the test is automatable and has
been automated. Larger test cases may also contain prerequisite states or steps,
and descriptions. A test case should also contain a place for the actual result.
These steps can be stored in a word processor document, spreadsheet, database
or other common repository. In a database system, you may also be able to see
past test results and who generated the results and the system configuration used
to generate those results. These past results would usually be stored in a separate
table.
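The fields listed above can be sketched as a simple record type; this is an illustrative Python layout following the text, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Mandatory core (IEEE 829-style): an input and an expected result.
    case_id: str
    test_input: str = ""
    expected_result: str = ""
    actual_result: str = ""                    # filled in during execution
    # Optional fields named in the text (invented layout).
    related_requirements: list = field(default_factory=list)
    steps: list = field(default_factory=list)  # or kept in a shared procedure
    automatable: bool = False
    automated: bool = False

    def passed(self):
        return self.actual_result == self.expected_result

tc = TestCase(case_id="TC-001",
              test_input="condition x",
              expected_result="derived result y")
tc.actual_result = "derived result y"          # recorded after running the test
print(tc.case_id, "passed:", tc.passed())
```

Stored in a database rather than a spreadsheet, each executed instance of such a record would simply become a row in a results table, keyed by tester and system configuration.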
The term test script is the combination of a test case, test procedure and test data.
Initially the term was derived from the byproduct of work created by automated
regression test tools. Today, test scripts can be manual, automated or a
combination of both.
The most common term for a collection of test cases is a test suite. The test
suite often also contains more detailed instructions or goals for each collection of
test cases. It definitely contains a section where the tester identifies the system
configuration used during testing. A group of test cases may also contain
prerequisite states or steps, and descriptions of the following tests.
Collections of test cases are sometimes incorrectly termed a test plan. They might
correctly be called a test specification. If a sequence is specified, it can be
called a test script, scenario or procedure.
A sample testing cycle
Although testing varies between organizations, there is a cycle to testing:
1. Requirements Analysis: Testing should begin in the requirements phase of
the software development life cycle.
During the design phase, testers work with developers to determine what
aspects of a design are testable and under what parameters those tests will work.
2. Test Planning: Test Strategy, Test Plan(s), Test Bed creation.
3. Test Development: Test Procedures, Test Scenarios, Test Cases, Test Scripts
to use in testing software.
4. Test Execution: Testers execute the software based on the plans and tests
and report any errors found to the development team.
5. Test Reporting: Once testing is completed, testers generate metrics and
make final reports on their test effort and whether or not the software tested
is ready for release.
6. Retesting the Defects
Not all errors or defects reported must be fixed by a software development team.
Some may be caused by errors in configuring the test software to match the
development or production environment. Some defects can be handled by a
workaround in the production environment. Others might be deferred to future
releases of the software, or the deficiency might be accepted by the business user.
There are yet other defects that may be rejected by the development team (of
course, with due reason) if they deem it inappropriate to be called a defect.
7.3 IMPLEMENTATION
Implementation is the process of converting a new or revised system design
into an operational one. There are three types of implementation:
• Implementation of a computer system to replace a manual system. The
problems encountered are converting files, training users, and verifying
printouts for integrity.
• Implementation of a new computer system to replace an existing one. This
is usually a difficult conversion. If not properly planned, there can be many
problems.
• Implementation of a modified application to replace an existing one using
the same computer. This type of conversion is relatively easy to handle,
provided there are no major changes in the files.
Implementation in the Generic tool project is done in all modules. In the
first module, user-level identification is done. Every user is identified as
genuine or not before being allowed to access the database, and a session is
generated for the user. Illegal use of any form is strictly avoided.
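The identification step might look roughly like the following sketch; the user store, hashing and token scheme are invented stand-ins (the real module checks credentials against the application's database):

```python
import hashlib
import secrets

# Invented in-memory user store: username -> password hash (illustrative only).
users = {"admin": hashlib.sha256(b"s3cret").hexdigest()}
sessions = {}

def login(username, password):
    """Identify the user; issue a session token only for genuine credentials."""
    stored = users.get(username)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    if stored is None or stored != supplied:
        return None                      # illegal access attempt is refused
    token = secrets.token_hex(16)        # session generated for the genuine user
    sessions[token] = username
    return token

assert login("admin", "wrong") is None   # bad password: no session
token = login("admin", "s3cret")
print("session issued for", sessions[token])
```

Every subsequent request would carry the token, so the database is only ever touched on behalf of an identified session.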
In the table creation module, tables are created with user-specified fields,
and the user can create many tables at a time. Conditions, constraints and
calculations may be specified in the creation of tables. The Generic code
maintains the user requirements throughout the project.
In the updating module, the user can update, delete or insert a new record
in the database. This is a very important module in the Generic code project.
The user has to specify a field value in the form, and the Generic tool then
automatically fills in the whole set of field values for that particular record.
In the reporting module, the user can get reports from the database in a
two-dimensional or three-dimensional view. The user has to select a table and
specify a condition, and the report is then generated for the user.
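The reporting step can be sketched as below; the table, columns and condition are invented for the example, and the real module reads the project's MS-SQL database rather than SQLite:

```python
import sqlite3

# Invented sample data standing in for the application's repair records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE repairs (vehicle TEXT, cost REAL)")
conn.executemany("INSERT INTO repairs VALUES (?, ?)",
                 [("bus-1", 120.0), ("bus-2", 450.0), ("bus-1", 80.0)])

def report(table, condition, params=()):
    # The table and condition come from the tool's own picker in this sketch
    # (trusted input); user-supplied values are still passed as bound parameters.
    return conn.execute(
        f"SELECT * FROM {table} WHERE {condition}", params).fetchall()

rows = report("repairs", "cost > ?", (100,))
print("rows in report:", len(rows))
```

The rendering step (two- or three-dimensional view) would then be driven entirely by the returned row set.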
9.1. INTRODUCTION
SYSTEM SECURITY is the protection of computer-based resources, including
hardware, software, data, procedures and people, against unauthorized use or
natural disasters. DATA SECURITY is the protection of data from loss,
disclosure, modification and destruction.
Various client-side validations are used to ensure that only valid data is
entered on the client side. Client-side validation saves the server the time
and load of handling invalid data. Some of the checks imposed are:
• VBScript is used to ensure that required fields are filled with suitable data
only. Maximum lengths of the fields of the forms are appropriately defined.
• Forms cannot be submitted without filling in the mandatory data, so that the
manual mistake of submitting empty mandatory fields is caught at the client
side, saving server time and load.
• Tab-indexes are set according to need, taking into account the ease of the
user while working with the system.
• Server-side constraints have been imposed to check the validity of primary
and foreign keys. A primary key value cannot be duplicated; any attempt to
duplicate a primary key value results in a message intimating the user about
those values. Forms using foreign keys can be updated only with existing
foreign key values.
• The user is intimated through appropriate messages about successful
operations or exceptions occurring at the server side.
• Various access control mechanisms have been built so that one user may not
interfere with another. Access permissions for the various types of users are
controlled according to the organizational structure. Only permitted users can
log on to the system and have access according to their category. Usernames,
passwords and permissions are controlled on the server side.
• Using server-side validation, constraints on several restricted operations
are imposed.
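The primary-key and foreign-key checks can be sketched by letting the database enforce the constraints and translating the violation into a user message; the schema below is invented for the example (SQLite standing in for MS-SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")          # enforce FK checks in SQLite
conn.execute("CREATE TABLE vendor (vendor_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE part (part_id INTEGER PRIMARY KEY, "
             "vendor_id INTEGER REFERENCES vendor(vendor_id))")
conn.execute("INSERT INTO vendor VALUES (1, 'Acme Motors')")

def try_insert(sql, params):
    """Attempt the insert; turn a constraint violation into a user message."""
    try:
        conn.execute(sql, params)
        return "ok"
    except sqlite3.IntegrityError as exc:
        return f"rejected: {exc}"                 # message intimating the user

print(try_insert("INSERT INTO vendor VALUES (?, ?)", (1, "Dup")))   # duplicate PK
print(try_insert("INSERT INTO part VALUES (?, ?)", (10, 99)))       # unknown FK
print(try_insert("INSERT INTO part VALUES (?, ?)", (11, 1)))        # valid FK
```

Delegating the check to the database's own constraints keeps the rule in one place; the server-side handler only has to catch the violation and phrase the message.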
It has been a great pleasure for me to work on this exciting and challenging
project. The project proved valuable, as it provided practical knowledge not
only of programming a VB.NET Windows-based application, but also of all the
handling procedures related to the “Greyhound Fleet Manager”. It also provided
knowledge about the latest technology used in developing client-server
applications, which will be in great demand in the future. This will provide
better opportunities and guidance for developing projects independently in the
future.
BENEFITS:
The project is identified by the merits of the system offered to the user. The merits
of this project are as follows: -
• This project allows the user to enter data through simple and interactive
forms. This is very helpful for the client, who can enter the desired
information with great simplicity.
• The user is mainly concerned about the validity of whatever data he enters.
There are checks at every stage of any new creation, data entry or updation,
so that the user cannot enter invalid data, which could create problems at a
later date.
• Sometimes, in the later stages of using the project, the user finds that he
needs to update some of the information he entered earlier. There are options
by which he can update the records. Moreover, there is a restriction that he
cannot change the primary data field, which preserves the validity of the data
to a greater extent.
• The user is provided the option of monitoring the records he entered earlier.
He can see the desired records with the variety of options provided to him.
• From every part of the project the user is provided with links through
framing, so that he can go from one option of the project to another as per
the requirement. This is bound to be simple and very friendly as far as the
user is concerned. That is, we can say that the project is user friendly,
which is one of the primary concerns of any good project.
• Data storage and retrieval will become faster and easier to maintain because
data is stored in a systematic manner and in a single database.
• The decision-making process will be greatly enhanced because of faster
processing of information, since collecting data from information available
on the computer takes much less time than under the manual system.
• Collating sample results becomes much faster because the user can view the
records of past years at a time.
• Easier and faster data transfer through latest technology associated with the
computer and communication.
• Through these features, the system will increase efficiency, accuracy and
transparency.
LIMITATIONS:
• Training for simple computer operations is necessary for the users working on
the system.
• Online payments can be included in future.
• Managing prospective dealers, and availing discounts and offers to fetch
more profits: managing these discounts and offers can be incorporated in future.
• Integrating GPS with the application to track current vehicle information,
including its location, speed, etc., can be very useful in instances when the
vehicle goes missing for some reason.
• Incorporating auto alert system to send vehicle for servicing.
• FOR .NET INSTALLATION
www.support.microsoft.com
• FOR DEPLOYMENT AND PACKING ON SERVER
www.developer.com
www.15seconds.com
Senn
Robert Pressman