Project Document
Master Test Plan
Author: Web Group Test Manager
Creation Date: August 17, 1999
Last Updated:
Version: 1.0
Approvals:
____________________________   _______
Quality Control Test Manager   Date

____________________________   _______
WinRunner Core R&D Manager     Date
Change Record

Date       Author  Version  Change Reference
17-Aug-99          1.0      No previous document
Introduction
Disclaimer
This document does not enforce a testing methodology or recommend specific hardware
or software. It is an example of a typical master test plan implementation for a Web
application testing project. The example in this document is based on the experience of
the Mercury Interactive Quality Control team and Mercury Interactive's large customer
base. The purpose of this example is to help new Web testing teams quickly jump into the
testing process.
System Overview
Flights is a sample application used to demonstrate the capabilities of Mercury
Interactive's testing tools. It provides a simple model of a flight reservation system. The
application is used for pre-sales demonstrations, tutorials and training programs. It should
be installed on the Web server. All data processed by the application is maintained in
ASCII files in the Web server file system.
Flights is installed on the corporate Web server and is accessible over the Internet to the
application engineers. Note that Flights periodically undergoes changes in order to
demonstrate new capabilities of Mercury Interactive's testing tools.
Purpose
This document describes a framework for testing the Flights application. It defines the
stages of the testing process and a schedule. It also describes the methodology and
techniques used to develop and execute tests. The Master Test Plan will also serve as a
baseline for verifying and auditing the testing process.
Tested Features
The testing process will cover the following functional areas:
Installation
User interface
Order management
The following operational aspects will be addressed:
Simultaneous users support
Security
Performance
Recovery will not be covered in the testing process, because it falls outside the intended
scope of use of the application.
Testing Types
A detailed test plan at the subsystem and system levels will be developed using functional
testing techniques.
1. Specification-based testing
Subsystem testing starts with a features evaluation. Its purpose is to explore the system
in order to learn its functions via the user interface. The features evaluation enables a
sanity check of the conceptual design of the system. It also substitutes for formal usability
testing by allowing a test engineer to provide feedback on usability issues.
The major approach for subsystem testing is specification-based testing. The
specification should be decomposed to the function level. The domains of the function
parameter values should be identified. Environmental dependencies of the application
should be analyzed. Decomposition and analysis should take into account the experience
gathered in the features evaluation stage. Test requirements should be derived from the
functional decomposition and the environmental analysis. Test cases should then be
designed to cover the test requirements. The general rule for test case design is to
combine 2-3 functions and 6-7 requirements per test case, as the sketch below illustrates.
Test data and test procedures are developed according to the test design.
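As an illustration of this design rule, the following Python sketch shows one way to batch
functions and requirements into test case outlines. The function and requirement names
are hypothetical and serve only to demonstrate the grouping; this is not a project
deliverable.

    # Illustrative sketch of the "2-3 functions, 6-7 requirements per
    # test case" design rule. All names below are hypothetical examples.
    def batch(items, size):
        """Split a list into consecutive chunks of at most `size` items."""
        return [items[i:i + size] for i in range(0, len(items), size)]

    functions = ["create_order", "update_order", "delete_order",
                 "search_flights", "log_in", "log_out"]
    requirements = [f"REQ-{n:03d}" for n in range(1, 15)]

    # Pair each group of up to 3 functions with a group of up to 7
    # requirements to form a test case outline.
    for i, (funcs, reqs) in enumerate(zip(batch(functions, 3),
                                          batch(requirements, 7)), start=1):
        print(f"Test case {i}: functions={funcs}")
        print(f"  covers requirements: {reqs}")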
The subsystem tests should be conducted under the following initial conditions:
Initial use of the application, when the orders database is still empty.
After a certain volume of data has been accumulated.
2. User scenarios testing
System testing will be based on the user scenario technique. User scenarios will be
derived from the WinRunner and LoadRunner tutorial and training materials. Test data
for the user scenarios should be selected according to user profiles; the major profiles are
a computer specialist and a business user (see the sketch below).
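The two profile names below come from this section; the data values are hypothetical
examples of how scenario test data could be keyed to a profile:

    # Illustrative sketch: selecting user scenario test data by profile.
    # The two profiles are named in this plan; the values are hypothetical.
    PROFILE_DATA = {
        "computer specialist": {
            "navigation": "direct URLs and keyboard shortcuts",
            "orders_per_session": 25,
            "invalid_input": True,   # deliberately probes error handling
        },
        "business user": {
            "navigation": "menus and links only",
            "orders_per_session": 2,
            "invalid_input": False,  # typical occasional usage
        },
    }

    def scenario_data(profile):
        """Return the data set a user scenario should be driven with."""
        return PROFILE_DATA[profile]

    print(scenario_data("business user"))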
3. Operational testing
Operational testing will address the operational aspects listed in the Tested Features
section: simultaneous user support, security and performance. The sketch below
illustrates the simultaneous-users aspect.
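In this project, load of this kind is generated by LoadRunner virtual users. Purely as an
illustration of the simultaneous-users aspect, the Python sketch below spawns concurrent
clients against a hypothetical order URL and reports success counts and response times:

    # Illustrative sketch of a simultaneous-users check. In the actual
    # project this is LoadRunner's job; the URL below is hypothetical.
    import threading
    import time
    import urllib.request

    URL = "http://webserver.example.com/flights/order"  # hypothetical
    USERS = 20
    timings = []
    lock = threading.Lock()

    def virtual_user():
        start = time.time()
        try:
            urllib.request.urlopen(URL, timeout=30).read()
            elapsed = time.time() - start
        except Exception:
            elapsed = None  # record failures as missing timings
        with lock:
            timings.append(elapsed)

    threads = [threading.Thread(target=virtual_user) for _ in range(USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    ok = [t for t in timings if t is not None]
    print(f"{len(ok)}/{USERS} requests succeeded")
    if ok:
        print(f"average response time: {sum(ok) / len(ok):.2f}s")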
Testing Tools
The following testing tools can be used to manage and automate the testing process for
the Flights application:
TestDirector
Flights-specific tests will be kept as part of the existing WinRunner test project, except for
performance tests, which will be part of the existing LoadRunner test project. Results of
test execution and discovered defects will also be kept in these projects. The TestDirector
Web template database will be used as a source for the detailed test plan development.
WinRunner with WebTest Add-in
WinRunner can be used to automate system tests, browser matrix tests and regression
tests. These test types share a high number of common scenarios, so high reusability can
be achieved.
The testing of the Flights order handling mechanism should be done using the
WinRunner Data Driver, so that multiple data sets can be entered by a single scenario;
the sketch below illustrates the pattern.
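WinRunner's Data Driver feeds parameter values from a data table into a recorded script.
The Python sketch below mimics that data-driven pattern outside WinRunner; the
orders.csv file and the submit_order step are hypothetical stand-ins for the recorded
scenario and its data table.

    # Illustrative sketch of the data-driven pattern used by the WinRunner
    # Data Driver: one recorded scenario, many data rows. The CSV file and
    # submit_order() are hypothetical stand-ins.
    import csv

    def submit_order(row):
        """Stand-in for the recorded order-entry scenario."""
        print(f"ordering flight {row['flight']} for {row['passenger']} "
              f"({row['seats']} seats)")

    with open("orders.csv", newline="") as f:  # columns: flight,passenger,seats
        for row in csv.DictReader(f):
            submit_order(row)  # same scenario, executed once per data set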
Testing Process
The testing process is summarized below.
Subsystem tests include design, automation and execution stages that partially overlap
on the time axis. Feature testing can combine functional testing with aspects specific to
Web applications, such as Web page quality. When initial stability of the application
functionality is achieved, browser-dependent behavior of the application can be explored.
System tests include stages similar to those of the subsystem testing phase. Design of the
system test starts when certain stability is achieved during execution of the subsystem
test. The first system test cycle starts concurrently with the last subsystem test cycles.
System tests should combine user scenario planning and execution with operational
testing. Operational testing should be done in an environment close to production. The
production environment will be built close to release; if operational testing is moved
closer to release, it can use this environment. The testing process is completed by the
release procedure. The exact entrance/exit criteria for every stage are described below.
All communication issues between departments at entrance/exit of the corresponding
stages are described in the Communication Procedures and Vehicles document.
Entrance/Exit Criteria
1. Subsystem testing
1. Features evaluation
The features evaluation process starts as soon as a feature is integrated into the Flights
application and released to the Quality Control team. The feature acceptance condition
is a sanity test conducted by the developer. The sanity test should confirm that a user is
able to exercise the feature for at least one hour while no defects that limit access to the
functionality (e.g. crashes, assertions) are observed.
The features evaluation process ends when a test engineer has evaluated every feature
function and confirmed the conceptual validity of the feature design. The stage is exited
when a final list of corrective actions is defined and approved by the Quality Control
team leader and the developer responsible for the feature.
2. Test requirements development
Test requirements development starts when the features evaluation process confirms the
conceptual validity of the feature design. The scope of the changes should be defined and
approved. Afterwards, development of the requirements can start for the stable functions.
Test requirements can also be developed as a means of feature design validation.
Test requirements development ends when all functions and parameters of the feature
selected for testing (see the Tested Features paragraph) are covered by the requirements.
The stage is exited when the requirements are reviewed and approved by the Quality
Control team leader.
3. Test case design
Test case design starts when the requirements for a feature are approved. Another
condition is the stability of the design: the design is approved and no major changes are
expected.
Test case design ends when all test requirements selected for testing are covered by the
test cases. The stage is exited when all the test cases are reviewed and approved by the
Quality Control team leader.
4. Automation
The automation process starts when the feature functions selected for automation are
stable. Stability means that the tested application contains navigation flows through
which a user can activate a function, and that executing these paths allows continuous
activation of the functions for at least one hour. These navigation flows should be easy to
identify, i.e. identifying a flow should take no more than 5 minutes.
Automation stops when all planned test scripts are fully implemented. The scope of the
automated test scripts is defined in the Testing Tools Implementation document. Final
implementation means that the scripts are reviewed by a senior test engineer and
approved by the Quality Control team leader.
5. Test cycle
A test cycle starts when all test cases selected for the cycle are implemented and available
in the TestDirector repository. The tested application should pass an R&D sanity check
proving that the basic functionality of the application is stable, i.e. the application can be
exercised for at least an hour without major failures.
The test cycle stops either when all test cases selected for the cycle have been executed,
or when the cycle is suspended according to the suspension criteria (see the Suspension
Criteria paragraph).
2. System testing
1. User scenarios
User scenario development starts when the first subsystem test cycle is completed and no
major changes are expected as a result of the cycle. Another condition is the availability
of the tutorial and training materials.
User scenario development ends when the criteria described in the Testing Strategy and
Approach section, paragraph System testing, are achieved. The stage is exited when the
user scenarios are reviewed by a senior test engineer and the education group manager,
and approved by the Quality Control team leader.
2. Test case design
Test case design starts when the user scenarios selected for implementation are reviewed
and approved.
Test case design ends when all user scenarios selected for implementation are covered by
the test cases. The stage is exited when all the test cases are reviewed and approved by
the Quality Control team leader.
3. Automation
The automation process starts when no major changes are expected in the user model and
interface of the tested application. The infrastructure of the subsystem automated test
suite should be completed. User scenarios selected for automation should be accessible
to the user.
Automation ends when all planned test scripts are fully implemented. The scope of the
automated test scripts is defined in the Testing Tools Implementation document. Final
implementation means that the scripts are reviewed by a senior test engineer and
approved by the Quality Control team leader.
4. Test cycle
See 1.4 above.
3. Web specific testing
1. Test requirements
Test requirements development starts when subsystem testing confirms the conceptual
validity of the application design (at least at the feature level). The final list of supported
Web browsers, servers and operating systems is provided by the product marketing
manager.
Test requirements development ends when analysis of the Web-specific aspects of the
application does not yield any additional testing points. Another criterion can be complete
coverage of the Web testing checklist provided as a template with TestDirector 6.0.
The stage is exited when the requirements are reviewed and approved by the Quality
Control team leader.
2. Test case design
Test case design starts when the Web-specific test requirements are approved. Another
condition for starting the design process is the reliability of the application, as estimated
from subsystem test execution. The mean time between failures should reach at least 30
minutes (the average execution duration of a test scenario); a worked example follows
below.
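A worked example of this entry criterion; the run durations below are hypothetical:

    # Illustrative MTBF check against the 30-minute entry threshold.
    # Durations are hypothetical minutes of scenario execution observed
    # before each failure.
    runs_minutes = [45, 20, 70, 15, 60]
    failures = len(runs_minutes)

    mtbf = sum(runs_minutes) / failures   # mean time between failures
    print(f"MTBF = {mtbf:.1f} minutes")   # 42.0 minutes for this data
    print("entry criterion met" if mtbf >= 30 else "entry criterion NOT met")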
Test case design stops when all test requirements selected for testing are covered by the
test cases. The stage is exited when all the test cases are reviewed and approved by the
Quality Control team leader.
3. Automation
The automation process starts when no major changes are expected in the user model and
interface of the tested application. The infrastructure of the subsystem automated test
suite should be completed. Test cases selected for automation should be accessible to the
user.
Automation stops when all planned test scripts are fully implemented. The scope of the
automated test scripts is defined in the Testing Tools Implementation document. Final
implementation means that the scripts are reviewed by a senior test engineer and
approved by the Quality Control team leader.
4. Test cycle
See 1.4 above.
Deliverables
1. Subsystem testing
1. Feature evaluation report.
This document should include comments on the conceptual model of the feature. It
covers usability issues and the compatibility of the feature with the rest of the system.
2. List of corrective actions.
This document should describe required changes in the feature design or implementation.
3. Test requirements list.
A set of test requirement documents in HTML or Microsoft Word formats. The test
requirements addressing the same feature should be grouped together either in one
document or using the Word/HTML hyperlinks mechanism.
4. Test case design
A detailed manual test plan in the TestDirector repository.
5. WinRunner test suite
A set of WinRunner test scripts structured and stored according to the Test Standards
document. The tests should be registered in the TestDirector repository.
6. Test cycle
A test set in TestDirector, which includes run results with links to the discovered defects.
2. System testing
1. User scenarios
A set of manual tests without steps in the TestDirector repository. The description of each
test should identify the user profile and the source of the scenario (e.g. a reference to the
corresponding tutorial chapter).
2. Test case design
A detailed manual test plan in the TestDirector repository.
3. WinRunner test suite
A set of WinRunner test scripts structured and stored according to the Test Standards
document. The tests should be registered in the TestDirector repository.
4. LoadRunner test suite
A set of LoadRunner scenarios and Web virtual users built according to demo and
training materials and registered in TestDirector.
5. Test cycle
A test set in TestDirector, which includes run results with links to the discovered defects.
3. Web specific testing
1. Test requirements list
A set of test requirement documents in HTML or Microsoft Word formats. Test
requirements addressing the same feature should be grouped together either in one
document or using the Word/HTML hyperlinks mechanism.
2. Test case design
A detailed manual test plan in the TestDirector repository.
3. WinRunner test suite
A set of WinRunner test scripts structured and stored according to the Test Standards
document. The tests should be registered in the TestDirector repository.
4. Test cycle
A test set in TestDirector that includes run results with links to the discovered defects.
4. Build notification
A description of the software passed to Quality Control for validation. It includes the
build objective, a summary of the changes, points of interest for Quality Control and a
detailed list of the changes from the configuration management system.
5. Product status
A periodic summary of the product status, including a general impression, a list of the
major problems and test process progress data.
6. Release status
Release documentation is described in the Release Procedure document.
Test Environment
Hardware
The testing will be done using the following PC configurations:
A PC for each team member (type A)
A server (both file and database)
A Web server
2 PCs for automated test execution and configuration testing (type B)
These PCs should have replaceable hard disk drawers, so that a drawer can be loaded
with the clean environment configuration described in the Software paragraph below.
All PCs should be connected by a LAN based on the TCP/IP protocol.
Workstations should have the following hardware configuration:
CPU: Pentium II 266 MHz
RAM: 64 MB
Hard disk: 4 GB
Servers should have the following hardware configuration:
CPU: Pentium II 333 MHz
RAM: 256 MB
Hard disk: 9 GB
The Web server should be accessed through a Nokia IP 330 firewall. The firewall should
be configured according to the Security Requirements Specification, which is part of the
project documentation.
Software
Two type A PCs and one type B PC should dual-boot Windows 98 and Windows NT in
the following configurations:

OS              Build  Service Pack
Windows NT 4.0  1381   SP5
Windows 98
The rest of the type A PCs and one type B PC should dual-boot Windows 95 and
Windows NT in the following configurations:

OS              Build  Service Pack
Windows NT 4.0  1381   SP4
Windows 95      950
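As a sketch of how a type B machine could be verified against these configurations
before an automated cycle starts (the check itself is an assumption, not a documented
project procedure; how the OS reports itself is taken as input):

    # Illustrative pre-cycle environment check against the dual-boot
    # configurations above. Wiring this to a real machine query is left
    # out; running such a check is an assumption, not a project procedure.
    REQUIRED = [
        {"os": "Windows NT 4.0", "build": "1381", "service_pack": {"SP4", "SP5"}},
        {"os": "Windows 98"},
        {"os": "Windows 95", "build": "950"},
    ]

    def matches(reported, required):
        """True if the reported OS satisfies one required configuration."""
        if reported["os"] != required["os"]:
            return False
        if "build" in required and reported.get("build") != required["build"]:
            return False
        sp = required.get("service_pack")
        return sp is None or reported.get("service_pack") in sp

    reported = {"os": "Windows NT 4.0", "build": "1381", "service_pack": "SP5"}
    print(any(matches(reported, r) for r in REQUIRED))  # True for this machine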
Roles and Responsibilities
This team provides the testing project with the work plan, procedures and guidelines. It is
responsible for monitoring progress and the implementation of corrective actions.
1. WinRunner Core and LoadRunner Web R&D Managers
The WinRunner Core and LoadRunner Web R&D Managers provide the testing team
with the project schedule. They monitor the Defect Tracking System and assign tasks for
correcting defects. They also approve the testing process deliverables.
2. Quality Control Manager
The Quality Control Manager approves the completion of the testing process stages and
the deliverables of these stages. In addition, the Quality Control Manager coordinates
inter-team communication.
Schedule
This section chronologically lists the activities and milestones of the testing process. The
project will start on June 6, 1999. The project deadline (release to manufacturing of
WinRunner and LoadRunner 6.0) is September 15, 1999.
The schedule takes into account that part of the subsystem and system tests should be
repeated before the release, because WinRunner and LoadRunner may change after the
last cycle.
ID  Task                       Responsibility     Start Date  Duration (days)

Subsystem test

Planning
1                              QC test leader     6-Jun-99
2   Features evaluation        QC team leader     9-Jun-99
3                                                 10-Jun-99
4                                                 14-Jun-99
5                              Test engineer      15-Jun-99
6   Detailed test plan review  QC manager         17-Jun-99

Automation
7   Test Suite design          Automation expert  16-Jun-99
8   Test Suite implementation  Automation expert  20-Jun-99
9   Test Suite review          QC team leader     23-Jun-99   0.5

Execution
10  Cycle preparation          QC team leader     25-Jul-99   0.5
11  Cycle 1                                       26-Jul-99
12  Cycle 2                                       1-Aug-99
13                             QC manager         9-Aug-99    0.5
14  Cycle 3                                       5-Sep-99

Web specific test

Planning
15                             Web expert         20-Jun-99
16                                                21-Jun-99
17                                                22-Jun-99

Execution
18  Cycle preparation          QC team leader     2-Aug-99    0.5
19  Cycle 1                                       3-Aug-99
20  Cycle 2                                       8-Aug-99

System test

Planning
21                                                18-Jul-99
22                                                19-Jul-99   0.5
23                                                20-Jul-99
24                             QC team leader     22-Jul-99   0.5

Automation
25  Test Suite design          Automation expert  21-Jul-99   0.5
26  Test Suite implementation  Automation expert  22-Jul-99
27  Test Suite review          QC team leader     26-Jul-99   0.5

Execution
28  Cycle preparation          QC team leader     8-Aug-99    0.2
29  Cycle 1                                       8-Aug-99
30  Cycle 2                                       11-Aug-99
31                             QC manager         12-Aug-99   0.5
32  Cycle 3                    QC team leader     5-Sep-99
33  Release                    QC team leader     8-Sep-99    0.5
Dependencies between the tasks are defined in the Entrance/Exit Criteria paragraph.
Risks and Contingencies
If a new ODBC version is released, the compliance of the Flights application with this
version should be verified, and the positive or negative results should be mentioned in the
release notes. Compliance verification will require additional resources, which must be
provided by the Customer Support Organization (CSO).
If the tutorial or training materials are not available during the system testing phase, the
technical documentation and education team will need to provide the user scenarios for
the Quality Control team.
If the application becomes available to customers in addition to sales engineers,
scalability problems may arise. Scalability testing can be included as part of the
operational testing; the results will define our approach to this risk.