
Flights Reservation 1.0
Project Document
Master Test Plan
Author: Web Group Test Manager
Creation Date: August 17, 1999
Last Updated:
Version: 1.0

Approvals:

____________________________          _______
Quality Control Test Manager          Date

____________________________          _______
WinRunner Core R&D Manager            Date

Change Record

Date        Author                   Version  Change Reference
17-Aug-99   Web Group Test Manager   1.0      No previous document

Introduction
Disclaimer
This document does not enforce a testing methodology or recommend specific hardware
or software. It is an example of a typical master test plan implementation for a Web
application testing project. The example in this document is based on the experience of
the Mercury Interactive Quality Control team and Mercury Interactive's large customer
base. The purpose of this example is to help new Web testing teams quickly jump into the
testing process.

System Overview
Flights is a sample application used to demonstrate the capabilities of Mercury
Interactive's testing tools. It provides a simple model of a flights reservation system. The
application is used for pre-sales demonstrations, tutorials and training programs. It should
be installed on the Web server. All data processed by the application is maintained in
ASCII files in the Web server's file system.
Flights is installed on the corporate Web server and is accessible via the Internet to the
application engineers. Note that Flights periodically undergoes changes in order to
demonstrate the new capabilities of Mercury Interactive's testing tools.

Purpose
This document describes a framework for testing the Flights application. It defines the
stages of the testing process and a schedule. It also includes the methodology and
techniques used to develop and execute tests. The Master Test Plan will also be used as a
baseline for verifying and auditing the testing process.

Tested Features
The testing process will cover the following functional areas:
Installation
User interface
Order management
The following operational aspects will be addressed:
Simultaneous users support
Security
Performance

Features Not Tested

Recovery will not be covered in the testing process. This feature is excluded because of
the intended scope of use of the application.

Testing Strategy and Approach


Levels
The first step in testing the Flights application is to consider the functionality available to
the user, the application's accessibility via the Web, and its presentation in the Web
browser.
The functionality of the Flights application will be tested at the subsystem (features) and
system levels. Testing at the features level should be extensive because of the frequent
low-level changes caused by Web technology updates reflected in the application.
The purpose of the system test described in this document is to ensure that the application
fits the WinRunner and LoadRunner tutorial and training materials, i.e. that the application
flows described in these materials are highly reliable.
Web aspects of the Flights application should be covered by operational testing and
Web-specific tests.

Testing Types
A detailed test plan at subsystem and system levels will be developed using functional
testing techniques.
1. Specification-based testing
Subsystem testing starts with the features evaluation. Its purpose is to explore the system
in order to learn its functions via the user interface. The features evaluation enables you
to sanity check the conceptual design of the system. It also substitutes for formal usability
testing, allowing a test engineer to provide feedback on usability issues.
The major approach for subsystem testing is specification-based testing. The
specification should be decomposed to the function level. The domains of the function
parameter values should be identified. Environmental dependencies of the application
should be analyzed. The decomposition and analysis should take into account experience
gathered in the features evaluation stage. Test requirements should be derived from the
functional decomposition and environmental analysis. Test cases should be designed
to follow the test requirements. The general rule for test case design is to
combine 2-3 functions and 6-7 requirements. Test data and test procedures are developed
according to the test design.
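
As a rough illustration of this grouping rule, the Python sketch below bundles derived
test requirements into test case candidates. The function and requirement names are
hypothetical placeholders, not items from the actual Flights specification.

# Illustrative sketch only: grouping derived test requirements into test case
# candidates following the "2-3 functions, 6-7 requirements" rule of thumb.
# Function and requirement names are hypothetical.
from itertools import islice

requirements = {
    "create_order": ["REQ-01 valid flight number", "REQ-02 seat count limits",
                     "REQ-03 date format", "REQ-04 empty database behavior"],
    "update_order": ["REQ-05 change departure date", "REQ-06 change seat class"],
    "delete_order": ["REQ-07 delete existing order", "REQ-08 delete missing order"],
}

def chunk(items, size):
    """Yield successive fixed-size groups from a list."""
    it = iter(items)
    while piece := list(islice(it, size)):
        yield piece

# Combine 2-3 functions per test case; cap coverage at 6-7 requirements per case.
for case_id, funcs in enumerate(chunk(list(requirements), 2), start=1):
    reqs = [r for f in funcs for r in requirements[f]][:7]
    print(f"TC-{case_id:02d}: functions={funcs}")
    for r in reqs:
        print(f"   covers {r}")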
The subsystem tests should be conducted under the following initial conditions:
initial use of the application, when the orders database is still empty.
after a certain volume of data has been accumulated.
2. User scenarios testing
System testing will be based on the user scenarios technique. User scenarios will be
derived from the WinRunner and LoadRunner tutorial and training materials. Test data
for the user scenarios should be selected according to the user profiles; the major
profiles are a computer specialist and a business user.
3. Operational testing

Operational testing should address two areas: performance and security.


3.1. Performance testing
The current assumption is that a maximum of 20 field application engineers may access the
Flights application concurrently for demos. The purpose of the application and site
performance test is to ensure that all application responses stay within 3 seconds when
demo scenarios are executed concurrently.
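
The actual measurements will be taken with LoadRunner (see Testing Tools); the Python
sketch below only illustrates the pass criterion itself, i.e. up to 20 concurrent demo
requests each answered within 3 seconds. The application URL is a hypothetical placeholder.

# Illustrative sketch only -- the real performance test is run with LoadRunner.
# It checks that with 20 concurrent users no response exceeds 3 seconds.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

FLIGHTS_URL = "http://webserver.example.com/flights/"   # hypothetical address
CONCURRENT_USERS = 20
RESPONSE_LIMIT_SEC = 3.0

def timed_request(_):
    """Issue one request and return its response time in seconds."""
    start = time.perf_counter()
    urllib.request.urlopen(FLIGHTS_URL, timeout=10).read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    durations = list(pool.map(timed_request, range(CONCURRENT_USERS)))

slowest = max(durations)
print(f"slowest response: {slowest:.2f}s")
print("PASS" if slowest <= RESPONSE_LIMIT_SEC else "FAIL")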
3.2. Security testing
Security tests should ensure that access is limited to the Flights application only and that
it does not open access to the corporate Web site or corporate server resources. The test
environment should precisely emulate the configuration of the production Web server. It
should include the firewall configured according to the specification defined in the
Security Requirements document, which is part of the development documentation.
An IP spoofing technique should be used to validate the firewall configuration.
4. Browser Matrix
Browser-dependent tests will be implemented by distributing the selected subsystem
tests across the browser matrix. These tests will be selected to cover navigation through
the application pages and activation of the page controls. The tests should be repeated for
Microsoft Internet Explorer and Netscape Navigator. Different versions of these browsers
should be covered. The versions should be selected according to their popularity.
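
A possible shape for such a browser matrix is sketched below in Python; the selected test
names and browser versions are hypothetical placeholders, since the real selection depends
on the approved test plan and browser popularity data.

# Illustrative sketch only: pairing selected subsystem tests with the browsers
# and versions to be covered. Names and version numbers are hypothetical.
from itertools import product

browsers = {
    "Microsoft Internet Explorer": ["4.0", "5.0"],
    "Netscape Navigator": ["4.5", "4.7"],
}
selected_tests = ["navigate_order_pages", "activate_page_controls"]

# Every selected test is repeated for every browser/version combination.
matrix = [
    (test, browser, version)
    for test, (browser, versions) in product(selected_tests, browsers.items())
    for version in versions
]

for test, browser, version in matrix:
    print(f"{test:25s} -> {browser} {version}")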
5. Regression Testing
The purpose of regression testing is to ensure that the functionality of the tested
application is not broken by changes to the application. Regression tests reuse the
functional subsystem and system tests. Functional test results will be captured and
reviewed. The results certified as correct will be kept in the regression test expected
results baseline. Regression tests will also be based on the browser matrix distribution.
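
The comparison against the certified baseline could conceptually look like the Python
sketch below; the file names are hypothetical, since in practice both the captured results
and the expected results baseline live in the TestDirector repository.

# Illustrative sketch only: comparing a new functional run against the certified
# expected-results baseline kept for regression testing. File names are hypothetical.
import json

def load_results(path):
    """Load captured test results as a mapping of test name to output."""
    with open(path, encoding="ascii") as handle:
        return json.load(handle)

baseline = load_results("regression_baseline.json")   # certified results
current = load_results("current_run.json")            # latest functional run

regressions = {
    name: (expected, current.get(name))
    for name, expected in baseline.items()
    if current.get(name) != expected
}

for name, (expected, actual) in regressions.items():
    print(f"REGRESSION in {name}: expected {expected!r}, got {actual!r}")
print("baseline intact" if not regressions else f"{len(regressions)} regressions")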
6. Production Roll-out
The final test ensures that the application is installed properly at the production Web
server and accessible via the Internet.

Testing Tools
The following testing tools can be used to manage and automate the testing process for
the Flights application:
TestDirector
Flights-specific tests will be kept as part of the existing WinRunner test project, except
for performance tests. Performance tests will be part of the existing LoadRunner test
project. Results of the test execution and discovered defects will also be kept in these
projects. The TestDirector Web template database will be used as a source for the detailed
test plan development.
WinRunner with WebTest Add-in
WinRunner can be used to automate the system tests, browser matrix and regression tests.
These test types have a high number of common scenarios, so high reusability can be
achieved.
The testing of the Flights order handling mechanism should be done using the
WinRunner Data Driver so that multiple data sets can be entered by a single scenario.
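
The actual implementation will rely on the WinRunner Data Driver; purely as an
illustration of the data-driven idea, the Python sketch below drives one order-entry
scenario from multiple data rows in a CSV file. The file name, its columns and the
place_order helper are all assumptions, not part of the real test suite.

# Illustrative sketch only: one scenario driven by many data sets, mirroring
# what the WinRunner Data Driver does for the order handling tests.
import csv

def place_order(flight_no, passenger, seats):
    """Stand-in for the scripted order-entry scenario against the application."""
    print(f"ordering {seats} seat(s) on flight {flight_no} for {passenger}")
    return True   # the real scenario would return the application's result

with open("order_data.csv", newline="", encoding="ascii") as handle:
    for row in csv.DictReader(handle):
        ok = place_order(row["flight_no"], row["passenger"], int(row["seats"]))
        status = "passed" if ok else "FAILED"
        print(f"data set for {row['passenger']}: {status}")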

If application stability is low due to frequent changes, test automation should be shifted
to the functional tests.
LoadRunner (virtual users will be generated using Web protocol).
LoadRunner should be used for performance testing of the central Web site running the
Flights application.
A detailed plan for implementing the testing tools within the testing process framework
should be provided in a separate Tool Implementation Plan document.

Test Case Success/Fail Criteria


A test case succeeds if all its steps are executed in all selected environments and there is
no difference between the expected results and the actual application behavior and output.
The expected results are either an explicit definition of the expected behavior and output
of the application, or the conclusion of the test engineer that the actual behavior and
output are acceptable according to his/her experience as a domain expert.
A test case fails if at least one executed step failed. A step fails if there is a difference
between the expected results and the actual application behavior and output. The difference
can be explicit or implicit, as described above. The test fails even if the difference is
caused by a known defect already registered in the company's defect tracking system.
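
As a minimal illustration of this rule, the Python sketch below evaluates a test case from
its step results. The step data and defect reference are hypothetical; note that a known,
already-registered defect does not turn the verdict into a pass.

# Illustrative sketch only: a test case passes only when every executed step in
# every selected environment matches its expected result.
def test_case_passed(steps):
    """Return True only when all executed steps match their expected results."""
    return all(step["expected"] == step["actual"] for step in steps)

steps = [
    {"name": "open order form", "expected": "form shown", "actual": "form shown"},
    {"name": "submit order",    "expected": "order #123", "actual": "error 500",
     "known_defect": "DEF-42"},   # already in the defect tracking system
]

# The known defect on the second step does not change the verdict.
print("PASS" if test_case_passed(steps) else "FAIL")   # -> FAIL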

Testing process
The testing process summary is shown in the diagram below:

Subsystem testing includes design, automation and execution stages that partially overlap
on the time axis. Feature testing can combine functional testing with aspects specific to
Web applications, such as Web page quality. When initial stability of the application
functionality is achieved, the browser-dependent behavior of the application can be explored.
System testing includes stages similar to those of the subsystem testing phase. Design of
the system test starts when a certain stability is achieved during execution of the
subsystem tests. The first system test cycle starts concurrently with the last subsystem
test cycles. System tests should combine user scenario planning and execution with
operational testing. Operational testing should be done in an environment close to
production. The production environment will be built close to release; if operational
testing is moved closer to release, it can use this environment. The testing process is
completed by the release procedure. The exact entrance/exit criteria for every stage are
described below. All communication issues between departments at the entrance/exit of the
corresponding stages are described in the Communication Procedures and Vehicles document.

Entrance/Exit Criteria
1. Subsystem testing
1. Features evaluation.
The features evaluation process starts as soon as a feature is integrated with the Flights
application and released to the Quality Control team. The feature acceptance condition
is a sanity test conducted by the developer. The sanity test should confirm that a user is
able to activate the feature for at least one hour while no defects that limit access to the
functionality (e.g. crashes, assertions) are observed.
The features evaluation process ends when a test engineer has evaluated every feature
function and confirmed the conceptual validity of the feature design. The stage is exited
when a final list of corrective actions is defined and approved by the Quality Control team
leader and the developer responsible for the feature.
2. Test requirements development
Test requirements development starts when the features evaluation process confirms the
conceptual validity of the feature design. The scope of the changes should be defined and
approved. Afterwards, development of the requirements can start for the stable functions.
Test requirements can also be developed as a means of feature design validation.
Test requirements development ends when all functions and parameters of the feature
selected for testing (see the Tested Features paragraph) are covered by the requirements.
The stage is exited when the requirements are reviewed and approved by the Quality Control
team leader.
3. Test case design
Test case design starts when requirements for a feature are approved. Another condition is
the stability of the design. This means that the design is approved and no major changes
are expected.
Test case design ends when all test requirements selected for the testing are covered by
the test cases. The test cases design process is completed when all the test cases are
reviewed and approved by the Quality Control team leader.
4. Automation

The automation process starts when the feature functions selected for automation are stable.
Stability means that there are navigation flows in the tested application through which a
user can activate the function, and that executing these paths allows continuous function
activation for at least one hour. These navigation flows should be easy to identify, i.e.
identifying a flow should take no more than 5 minutes.
Automation stops when all planned test scripts are finally implemented. The scope of the
automated test scripts should be available in the Testing Tools Implementation document.
Final implementation means that the scripts are reviewed by a senior test engineer and
approved by the Quality Control team leader.
5. Test cycle
A test cycle starts when all test cases selected for the cycle are implemented and available
in the TestDirector repository. The tested application should pass an R&D sanity check
proving that the basic functionality of the application is stable, i.e. the application can
be activated for at least an hour without major failures.
The test cycle is stopped either when all test cases selected for the cycle have been
executed, or when the cycle is suspended according to the suspension criteria (see the
Suspension Criteria paragraph).
2. System testing
1. User scenarios
User scenario development starts when the first subsystem test cycle is completed and no
major changes are expected as a result of the cycle. Another condition is the availability
of the tutorial and training materials.
User scenario development ends when the criteria described in the System Testing paragraph
of the Testing Strategy and Approach section are met. The stage is exited when the user
scenarios are reviewed by a senior test engineer and an education group manager, and
approved by the Quality Control team leader.
2. Test case design
Test case design starts when the user scenarios selected for implementation are reviewed
and approved.
Test case design ends when all user scenarios selected for the implementation are covered
by the test cases. The test case design process is exited when all the test cases are
reviewed and approved by the Quality Control team leader.
3. Automation
The automation process starts when no major changes in the user model and interface of
the tested application are expected. The infrastructure of the subsystem automated test
suite should be completed. User scenarios selected for automation should be accessible to
the user.
Automation ends when all planned test scripts are finally implemented. The scope of the
automated test scripts should be available in the Testing Tools Implementation document.
Final implementation means that the scripts are reviewed by a senior test engineer and
approved by the Quality Control team leader.
4. Test cycle
See 1.4 above.
3. Web specific testing
1. Test requirements

Test requirements development starts when subsystem testing confirms the conceptual
validity of the application design (at least at the feature level). The final list of
supported Web browsers, servers and operating systems is provided by the product marketing
manager.
Test requirements development ends when analysis of the Web-specific application aspects
does not bring up any additional testing points. Another criterion is complete coverage of
the Web testing checklist provided as a template with TestDirector 6.0.
The stage is exited when the requirements are reviewed and approved by the Quality Control
team leader.
2. Test case design
Test case design starts when the Web-specific test requirements are approved. Another
condition for starting the design process is the reliability of the application, as
estimated from subsystem test execution. The mean time between failures should reach at
least 30 minutes (the average execution duration of a test scenario).
Test case design stops when all test requirements selected for testing are covered by the
test cases. The test case design process ends when all the test cases are reviewed and
approved by the Quality Control team leader.
3. Automation
The automation process starts when no major changes in the user model and interface of
the tested application are expected. The infrastructure of the subsystem automated test
suite should be completed. Test cases selected for automation should be accessible to the
user.
Automation stops when all planned test scripts are finally implemented. The scope of the
automated test scripts should be available in the Testing Tools Implementation
document. Final implementation means that the scripts are reviewed by a senior test
engineer and approved by the Quality Control team leader.
4. Test cycle
See 1.4 above.

Test Suspension Criteria/Resuming Requirements


A test case is suspended if an executed step failed in such a way that it is impossible to
proceed to any following step in the test. The suspended test case can be resumed when
the reason for the suspension is eliminated. The resumed test case should be executed again
from the beginning.
A test cycle should be suspended if:
defects are disabling access to major functionality of the tested application.
high-priority failures occur more frequently than once every 15 minutes.
a defect requires major design changes to the tested application.
A test cycle can be resumed if:
The defects that suspended the cycle execution are fixed and verified.
R&D has evidence that the mean time to failure is more than 15 minutes.
All design changes are verified and approved, and the test plan, including the cycle,
is updated according to these changes.
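
As an illustration of how these criteria could be evaluated, the Python sketch below
derives the mean time between high-priority failures from a log and combines it with the
other suspension conditions. The 15-minute threshold comes from the criteria above; the
timestamps and defect flags are hypothetical sample data.

# Illustrative sketch only: deciding whether a test cycle should be suspended
# from high-priority failure times and the other criteria listed above.
from datetime import datetime, timedelta

failure_times = [datetime(1999, 8, 3, 10, 0),
                 datetime(1999, 8, 3, 10, 12),
                 datetime(1999, 8, 3, 10, 20)]
blocking_defect_open = False        # defect disabling major functionality?
major_design_change_needed = False  # defect requiring major design changes?

gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
mean_time_between_failures = sum(gaps, timedelta()) / len(gaps)

suspend = (blocking_defect_open
           or major_design_change_needed
           or mean_time_between_failures < timedelta(minutes=15))

print(f"MTBF: {mean_time_between_failures}, suspend cycle: {suspend}")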

Test Process Deliverables

1. Subsystem testing
1. Feature evaluation report.
This document should include comments on the conceptual model of the feature. It covers
usability issues and the compatibility of the feature with the rest of the system.
2. List of corrective actions.
This document should describe required changes in the feature design or implementation.
3. Test requirements list.
A set of test requirement documents in HTML or Microsoft Word formats. The test
requirements addressing the same feature should be grouped together either in one
document or using the Word/HTML hyperlinks mechanism.
4. Test cases design
A detailed manual test plan in the TestDirector repository.
5. WinRunner test suite
A set of WinRunner test scripts structured and stored according to the Test Standards
document. The tests should be registered in the TestDirector repository.
6. Test cycle
A test set in TestDirector, which includes run results with the links to the discovered
defects.
2. System testing
1. User scenarios
A set of manual tests without steps in the TestDirector repository. The description of each
test should specify the user profile and the source of the scenario (e.g. a reference to the
corresponding tutorial chapter).
2. Test cases design
Detailed manual test plan in the TestDirector repository
3. WinRunner test suite
A set of WinRunner test scripts structured and stored according to the Test Standards
document. The tests should be registered in the TestDirector repository.
4. LoadRunner test suite
A set of LoadRunner scenarios and Web virtual users built according to demo and
training materials and registered in TestDirector.
5. Test cycle
A test set in TestDirector, which includes run results with the links to the discovered
defects.
3. Web specific testing
1. Test requirements list
A set of test requirement documents in HTML or Microsoft Word formats. Test
requirements addressing the same feature should be grouped together either in one
document or using the Word/HTML hyperlinks mechanism.
2. Test cases design
A detailed manual test plan in the TestDirector repository.
3. WinRunner test suite
A set of WinRunner test scripts structured and stored according to the Test Standards
document. The tests should be registered in the TestDirector repository.
4. Test cycle
A test set in TestDirector that includes run results with the links to the discovered defects.

4. Build notification
A description of the software passed to quality control for validation. It includes the build
objective, summary of the changes, points of interest for quality control and a detailed list
of the changes from the configuration management system.
5. Product status
A periodic summary of the product status including a general impression, a list of the
major problems and test process progress data.
6. Release status
Release documentation is described in the Release Procedure document.

Test Environment
Hardware
The testing will be done using the following PC configurations:
A PC for each team member (type A)
A server (both file and database)
A Web server
2 PCs for automated test execution and configuration testing (type B)
These PCs should have replaceable drawers. It should be possible to load the drawers with
a clean environment configuration, as described in the paragraph below. All PCs should be
connected by a LAN based on the TCP/IP protocol.
Workstations should have the following hardware configuration:
CPU: Pentium II 266 MHz
RAM: 64 MB
Hard disk: 4 GB
Servers should have the following hardware configuration:
CPU: Pentium II 333 MHz
RAM: 256 MB
Hard disk: 9 GB
The Web server should be accessed via a Nokia IP 330 firewall. The firewall should be
configured according to the Security Requirements specification, which is part of the
project documentation.

Software
Two type A PCs and one type B PC should have dual boot including Windows 98 and
Windows NT in the following configurations:
OS               Build               Service Pack
Windows NT 4.0   1381                SP5
Windows 98       1998, 2nd edition

The rest of type A PCs and one type B PC should have dual boot including Windows 95
and Windows NT in the following configurations:
OS               Build               Service Pack
Windows NT 4.0   1381                SP4
Windows 95       950

MS SQL 6.5 service pack 4 should be installed on the server machine.


WinRunner, LoadRunner and TestDirector 6.0 should be installed on the file server. All
PCs should have a workstation installation of these tools. The WinRunner and
TestDirector installation and configuration details can be found in the Tools
Implementation document.
The Web server PC should have Windows NT 4.0 with Service Pack 4 and Microsoft IIS 4.0
installed.

Roles and Responsibilities


1. WinRunner testing team
This team provides overall management of the testing process. It is responsible for the
design, implementation and analysis of all testing activities.
1. Quality Control Team Leader
The Quality Control team leader is responsible for monitoring and initiating the testing
activities according to the Master Test Plan. The team leader is also responsible for
maintaining and using the testing tools across the project.
2. Test Analyst
The Test Analyst is responsible for all steps related to test design.
3. Automation Expert
The Automation Expert is responsible for designing and implementing the testing tools.
4. Web Expert
The Web Expert is responsible for designing and implementing the web specific tests.
5. Senior Test Engineer
The Senior Test Engineer is responsible for implementing test cases and executing tests.
2. WinRunner Core and LoadRunner Web R&D teams
These teams provide a product to be tested and correct discovered defects. R&D is
responsible for any debugging activity.
3. System Administration group
This team is responsible for setting up the test environment. It supplies the testing team
with the drawers containing a clean environment when it is required by the test cycle.
4. Education group
This team provides the testing team with the tutorial and the training program based on
the Flights application.
5. Management team

This team provides the testing project with the work plan, procedures and guidelines. It is
responsible for monitoring progress and the implementation of corrective actions.
1. WR Core and LoadRunner Web R&D Managers
The WinRunner Core and LoadRunner Web R&D Managers provide the testing team
with the project schedule. They monitor the Defect Tracking System and assign tasks for
correcting defects. They also approve the testing process deliverables.
2. Quality Control Manager
The Quality Control Manager approves the completion of the testing process stages and the
deliverables of these stages. In addition, the Quality Control Manager coordinates
inter-team communication.

Staffing and Training Needs


The testing team should include:
1 Quality Control team leader
1 senior test engineer (should also have test analyst skills)
1 automation expert
1 Web expert
1 junior test engineer
There should be at least one system administrator available to handle the requests of the
testing team. The Web expert may be borrowed from the WebTest Add-in testing group.
All team participants should be familiar with the testing tools.

Schedule
This section chronologically lists the activities and milestones of the testing process. The
project will start on June 6, 1999. The project deadline (release to manufacturing of
WinRunner and LoadRunner 6.0) is September 15, 1999.
The schedule takes into account that part of the subsystem and system tests should be
repeated before the release, because WinRunner and LoadRunner may have changed since the
last cycle.
ID  Task                              Responsibility        Start Date  Duration (days)

Subsystem test
    Planning
    Master Test Plan                  QC test leader        6-Jun-99
    Features evaluation               QC team leader        9-Jun-99
    Test requirements development     Senior test engineer  10-Jun-99
    Test case design                  Senior test engineer  14-Jun-99
    Test case implementation          Test engineer         15-Jun-99
6   Detailed test plan review         QC manager            17-Jun-99
    Automation
7   Test Suite design                 Automation expert     16-Jun-99
8   Test Suite implementation         Automation expert     20-Jun-99
9   Test Suite review                 QC team leader        23-Jun-99   0.5
    Execution
10  Cycle preparation                 QC team leader        25-Jul-99   0.5
11  Cycle 1                           Senior test engineer  26-Jul-99
12  Cycle 2                           Senior test engineer  1-Aug-99
13  Subsystem test review meeting     QC manager            9-Aug-99    0.5
14  Cycle 3                           Senior test engineer  5-Sep-99

Web specific tests
    Planning
15  Test requirements development     Web expert            20-Jun-99
16  Test case design                  Senior test engineer  21-Jun-99
17  Test case implementation          Senior test engineer  22-Jun-99
    Execution
18  Cycle preparation                 QC team leader        2-Aug-99    0.5
19  Cycle 1                           Senior test engineer  3-Aug-99
20  Cycle 2                           Senior test engineer  8-Aug-99

System test
    Planning
21  User scenarios selection          Senior test engineer  18-Jul-99
22  Test cases design                 Senior test engineer  19-Jul-99   0.5
23  Test cases implementation         Senior test engineer  20-Jul-99
24  Detailed test plan review         QC team leader        22-Jul-99   0.5
    Automation
25  Test Suite design                 Automation expert     21-Jul-99   0.5
26  Test Suite implementation         Automation expert     22-Jul-99
27  Test Suite review                 QC team leader        26-Jul-99   0.5
    Execution
28  Cycle preparation                 QC team leader        8-Aug-99    0.2
29  Cycle 1                           Senior test engineer  8-Aug-99
30  Cycle 2                           Senior test engineer  11-Aug-99
24  System test review meeting        QC manager            12-Aug-99   0.5
25  Cycle 3                           QC team leader        5-Sep-99
26  Release                           QC team leader        8-Sep-99    0.5
Dependencies between the tasks are defined in the Entrance/Exit Criteria paragraph.

Problem and Issues Management


All application failures detected during the testing process should be reported to the
existing Defect Tracking System implemented using TestDirector for WinRunner and
LoadRunner testing projects. If the developer responsible for correcting the defect and
the Quality Control team leader agree that several reported failures are caused by the
same fault, duplicate failure reports can be closed. This fact should be registered in the
failure report remaining in the Defect Tracking System.
All suggestions, limitations and missing functions should be reported and classified in the
Defect Tracking System.
A detailed defect classification and defect tracking procedure is available in the Defect
Tracking Guidelines document.

Risks and Contingencies


We can expect frequent changes to the application following multiple requests from the
field and new technologies supported by Mercury Interactive tools. In this case we should
extend the feature evaluation stage and reduce the depth of the test requirement analysis.
We will also generalize detailed tests by eliminating steps that are too specific. The
stability notion serving as a criterion for the different entrance/exit stages should be
reviewed in order to deal with the frequent changes.

If a new ODBC version is released, compliance of the Flights application with this version
should be verified, and positive or negative results should be mentioned in the release
notes. Compliance verification will require additional resources that must be provided by
the Customer Support Organization (CSO).
If tutorial or training materials are not available during the system-testing phase, the
technical documentation and education team will need to provide the user scenarios for
the Quality Control team.
If the application is made available to customers in addition to sales engineers, scalability
problems may arise. We can include scalability testing as part of the operational testing.
The results will define our approach to this risk.

Das könnte Ihnen auch gefallen