Test Automation
Enterprise Testing Methodology (ETM)
Date
Project name
Authors
Document version
Disposition {draft/final}
Authors
This document was prepared by:
John Doe, Release Test Manager
John Doe, SIT Test Manager
John Doe, UAT Test Manager
Revision History
The softcopy of the most current version of this testing strategy plan can be found at {file location}.
Approval List
The following stakeholders have reviewed and approved the information in this testing strategy plan. The information in this document may be used in test preparation and test execution.
Distribution
The softcopy of the most current version of this testing strategy plan has been distributed to the following
individuals.
Documentation
The following documentation was used to create this testing strategy plan:
Table of Contents
Revision History
Approval List
Distribution
Documentation
1. Introduction
1.1. Purpose
1.2. Intended audience
1.3. Assumptions
3.1. Overview
3.2. HP's ALM Automation Center
3.2.1. Requirements
3.2.2. Test Script Development
3.2.3. Test Execution
3.2.4. Variances
3.2.5. Test Script Automation
3.2.6. Service-Oriented Architecture Testing
3.2.7. Business Process Testing
<The table of contents above is matched to the section outline below. No additions or modifications
should be required on this page. To update the table of contents to reflect current page numbers and
modified section titles, place the mouse pointer in the table of contents and right click. Select Update
Field, select Update entire table, and click OK. >
1. Introduction
1.1. Purpose
The purpose of this document is to recommend the automation tools (HP Unified Functional Testing),
leading practices, methodologies, data, resources, and training needs to achieve test automation
objectives, as well as provide a road map for meeting those objectives.
The scope of this document includes all current test automation tools and practices in use, as well as
additional tools and practices recommended to successfully meet test automation goals.
1.3. Assumptions
This document assumes the use of an automated test tool to implement test automation. Within this
document, examples and detail are derived from HP's ALM Automation Center for illustrative
purposes. However, other tools to automate test management and enable automated test script
creation may be used, such as IBM Rational TestManager and IBM Rational Robot.
In an age of accelerating product lifecycles and restrictive cost pressures, the traditional approach to testing has had serious consequences, putting IT organizations between a “rock and a hard place”. In today's business climate they are asked to do more with less and expected to deliver
higher quality systems in less time with fewer resources. Additionally, when corporations tighten their
budgetary belt, software testing is often the first systems-development item to be cut. This puts IT in a
bind. One way out of this bind is to incorporate test automation practices within the application lifecycle.
Test automation can greatly reduce testing cycle times, especially during regression testing. It also
reduces costs and effort, and supports iterative development and more frequent releases.
Test automation, however, cannot be done overnight. It takes a great deal of time and effort to build and
maintain the test automation tools that drive the process and contribute to the results. Additionally,
modular or ‘data-driven’ test automation frameworks do not produce immediate benefits. Industry
research has shown that it takes at least two or three major releases to recover the effort expended in the
initial automation setup and script creation.
The longer-term cost savings for implementing test automation are indicated in the diagram below. This
diagram shows that test automation can produce up to an 80% cumulative savings over eight or more
software releases.
Figure: Test Automation and Cumulative Test Execution Savings (per a survey by PwC and HP)
Test automation can yield significant savings in the time it takes to test a new application release. For
low-complexity projects, test automation can result in a 91% reduction in testing time. For medium-
complexity projects, test automation can reduce the test planning and execution time by up to 85%.
For high-complexity projects, testing time can be reduced by 77%. Also, test automation combined with accelerated iterative development will enable the client to release software more frequently.
A ‘data-driven’ test automation framework should be implemented in order to take advantage of the
cost-savings and increased application quality benefits that are made possible through automation.
Similar organizations that have implemented a ‘data-driven’ test automation framework have seen automated tests produce savings of more than 80% in time and cost compared with manual testing, once the automation framework was built.
A test management automation tool to facilitate test planning, execution and variance
remediation
A process to develop test artifacts which can be reused for various projects
These objectives are achieved through the use of a test automation framework which allows for the
development and use of modular test scripts which are easily maintainable and that can be leveraged
across several projects. The following sections will describe automation tool use and the ‘Data-
Driven’ testing methodology which is recommended to achieve the automation objectives discussed
above.
The test design process and the test automation framework must be developed as separate
entities
The test strategy/design must hide the complexities of the test framework from testers
<For each test phase, describe the types of testing that will be conducted. In summary, this description should explain the rationale for choosing these test phases/test types over alternatives, as well as how the overall approach will meet the quality objectives.>
The following test stages are …
3.2.1. Requirements
The ‘Requirements’ module of ALM provides the test team with a repository to develop and review
requirements for the system being developed. This repository is structured using a hierarchical
parent-child tree. The lowest “child” in this tree should contain the simplest use case object ID, in
business rule or supplemental specification form.
Testers should review these requirements and evaluate them for testability, i.e. each business rule or
supplemental specification can be explicitly passed or failed by performing a single system activity.
Requirements can be reviewed and approved as part of a quality gate. After approval, testers
can leverage the ‘Generate Tests’ functionality within this module. This function automatically builds
a prototype for all tests to be designed in the ‘Test Plan’ module with the same use case and object ID
structure as defined in the Requirements module.
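For illustration only, the parent-child hierarchy and the ‘Generate Tests’ behavior can be sketched in Python (this is not the ALM API; the class, field, and function names here are assumptions):

from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    """One node in the parent-child requirements tree. Leaf ("child")
    nodes carry the simplest use case / object ID, expressed as a
    business rule or supplemental specification."""
    req_id: str                      # e.g. use case + object ID
    description: str
    children: List["Requirement"] = field(default_factory=list)
    approved: bool = False           # set by the quality-gate review

    def is_leaf(self) -> bool:
        return not self.children

def generate_tests(root: Requirement) -> List[dict]:
    """Mimic the 'Generate Tests' idea: build a test prototype for every
    approved leaf requirement, preserving the tree's ID structure."""
    prototypes = []
    if root.is_leaf():
        if root.approved:
            prototypes.append({"test_id": f"TC_{root.req_id}",
                               "covers": root.req_id,
                               "steps": []})  # designed later in Test Plan
    else:
        for child in root.children:
            prototypes.extend(generate_tests(child))
    return prototypes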
The ALM Requirements module serves as a repository for business requirements only. It does not
provide any requirements generation or requirements management functionality offered by other
tools. It is recommended that additional tools be acquired for use to streamline the requirements
management process and integrate the business team earlier in the automated test management
lifecycle.
The scripts will be reviewed by business and development teams to ensure that the combination of
design steps and data pools will test all requirements. After the test scripts have been reviewed and
approved, the test analyst can map each of the design steps to the requirements in the Requirements
module.
The Test Plan module integrates directly with Automation Center, enabling testers to develop automated test scripts within ALM.
3.2.4. Variances
As the testing team executes the test scripts, actual results are compared against the expected results
and pre-determined acceptance criteria. Any discrepancy between the actual and expected results will
be considered a variance and will be tracked, investigated and remedied using the “Defects” module.
The Defects module is workflow-based, enabling business, development and testing users to
collaborate, view and track variances through the entire remediation process.
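As a minimal sketch of the variance concept (not the Defects module itself; the record fields and workflow states shown are assumptions):

from dataclasses import dataclass
from typing import List

@dataclass
class Variance:
    """A tracked discrepancy between expected and actual results."""
    test_id: str
    field_name: str
    expected: str
    actual: str
    status: str = "New"   # illustrative workflow: New -> Open -> Fixed -> Closed

def find_variances(test_id: str, expected: dict, actual: dict) -> List[Variance]:
    """Compare actual results against expected results and acceptance
    criteria; every mismatch becomes a variance record to be logged."""
    return [Variance(test_id, key, str(exp), str(actual.get(key)))
            for key, exp in expected.items()
            if str(actual.get(key)) != str(exp)]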
4. Automation Methodology and Process
4.2. Automation Approach
The ‘Data-Driven’ testing framework is the foundation of a successful test automation effort. This
framework requires the test lead to analyze the application(s) to be tested and design the test scripts in a fashion that allows them to be modular and reusable across multiple releases, while satisfying the criteria of being “data-driven” as described in Section 5.1.1 below.
data values. Multiple combinations of data types can be defined in the data pool for test
execution. Practically speaking, the data pool may be a Microsoft Excel spreadsheet with columns
for the data type, data input value, interim values, and output data values or expected results.
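A minimal sketch of how such a data pool drives execution (using a CSV file in place of Excel; the column names are assumptions, not a prescribed layout):

import csv

def run_data_driven(pool_path: str, business_function) -> None:
    """Execute one business function once per data-pool row. Each row
    supplies the data type, input value(s), interim values, and the
    expected result; the script logic itself never changes."""
    with open(pool_path, newline="") as f:
        for row in csv.DictReader(f):
            actual = business_function(row["data_type"], row["input_value"],
                                       row["interim_values"])
            status = "PASS" if actual == row["expected_result"] else "FAIL"
            print(f'{row["data_type"]}: {status} '
                  f'(expected {row["expected_result"]}, got {actual})')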
Technical factors:
o Stability of the component
o Scripting difficulty
Note: It is very important to remember that sometimes not automating saves as much time as
automating.
Given the effort required to develop and maintain automated scripts and baseline automation test data, and the instability of system functionality, it is recommended that 40% of test scripts be targeted for automation.
Level 1: Record and Playback – This automation will be used to perform “smoke tests” for
newly delivered builds before test execution begins
Level 2: Modularized/Componentized Scripting – This automation will be used to perform
functional testing. The automation scripts will be designed to test the smallest clusters of
functionality, with larger “end-to-end” scripts being constructed from the script repository to
be used for the ‘Executing’ phase. This provides flexibility of the automation suite to adapt to
changes in the application, reduces test script maintenance, and increases automated test
coverage
Level 3: Technical Testing – This automation will be used to perform system/services interface testing (XML messages) and database validation (SQL via ODBC); a database-validation sketch follows this list
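For Level 3 database validation, a step might be sketched as follows (assuming the pyodbc library, an installed ODBC driver, and hypothetical connection string, table, and column names):

import pyodbc  # assumes an ODBC driver for the target database is installed

def validate_payment_row(conn_str: str, payment_id: str,
                         expected_amount: float) -> bool:
    """Level 3 technical test: confirm via SQL over ODBC that the payment
    was persisted with the expected amount."""
    with pyodbc.connect(conn_str) as conn:
        row = conn.cursor().execute(
            "SELECT amount FROM payments WHERE payment_id = ?",
            payment_id).fetchone()
    return row is not None and float(row[0]) == expected_amount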
To accomplish these automation tasks, the effort will need to focus on two work streams in the
short-term:
1) Integrating the test script automation process to be performed in parallel with the release
functional testing cycle
2) Automating the selected regression test scripts that currently exist for past releases
The automation methodology described in this section will be utilized for both work streams. The
first automation effort will begin at the conclusion of the latest release and will consist of
automating all possible regression test scripts from past releases.
This effort will be performed in segments, targeting an overall regression test suite percent
complete at the end of each segment. Over time, we will achieve the overall regression suite target
automation state, enabling the automation effort for the following release to be conducted in
parallel with functional testing activities for that same release under test. Both work streams will
utilize the automated test script selection criteria provided within this document.
Figure: Work stream activities sync over time
For the best results, the test automation effort will be considered a standalone project and will
adhere to systems development lifecycle phases: Preparation, Planning, Development, Execution,
and Transition.
Note: “Functions” can be called from any of the above script types.
Figure: Test script hierarchy – the Driver Script calls Main Scripts, which invoke Business Function Scripts (BFS) and Subroutine scripts built from Function Scripts; results are reported to Test Manager/Test Director
A ‘Business Function’ and ‘Subroutine’ data-driven automated test script could be written as follows:
‘Payment’ Function
1. Start at Main Menu
2. Invoke a “Screen Navigation Function” to access the ‘Payment’ Screen
3. Read a data file containing specific data to enter for this test and input the data
4. Press the button or function-key required to ‘Post’ the payment
5. Read a data file containing specific expected results data
6. Compare this data to the data which is currently displayed in the confirmation screen
7. Write any discrepancies to an error report
8. Press the button or key required to return to the Main Menu or, if required, invoke a “Screen Navigation Function” to do so.
The ‘Driver’ script would set the test environment, load the user-defined functions, and then call the
test case script, which invokes the “Business Function” and “Subroutine” to perform the above test.
The test script would call these two scripts the number of times required to perform all the required
test cases’ data scenarios of this kind. In each case, the only thing that changes is the data contained in the files that are read and processed by the “Business Function” and “Subroutine” scripts.
Using this method, if we needed to process 50 different kinds of payments in order to verify all of the possible conditions, we would need only four scripts, re-usable for all 50 cases (sketched in code after the list):
the “Driver” script,
the “Test Script” (Post a Payment & Verify Results) script,
the “Payment” Business Function script, and
the “Verify Payment Summary & Transaction Details” Subroutine script.
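A minimal Python sketch of how these four pieces fit together (purely illustrative; real scripts would be written in the automation tool's own scripting language, such as VBScript in UFT, and the data-file columns and function names here are assumptions):

import csv

def screen_navigation(target: str) -> None:
    """Shared 'Function' script: navigate to the named screen."""
    print(f"navigate to {target}")          # tool-specific UI calls go here

def payment_business_function(data_row: dict, errors: list) -> None:
    """'Payment' Business Function script (steps 1-8 above)."""
    screen_navigation("Payment")                       # steps 1-2
    print(f"enter payment data: {data_row['input']}")  # step 3
    print("post payment")                              # step 4
    actual = data_row["input"]  # stand-in for reading the confirmation screen
    if actual != data_row["expected"]:                 # steps 5-6
        errors.append(f"payment {data_row['case']}: expected "
                      f"{data_row['expected']}, got {actual}")  # step 7
    screen_navigation("Main Menu")                     # step 8

def verify_summary_subroutine(data_row: dict, errors: list) -> None:
    """'Verify Payment Summary & Transaction Details' Subroutine script,
    re-usable by reversals, adjustments and forecasts as well."""
    print(f"verify summary for case {data_row['case']}")

def post_payment_test_script(pool_path: str, errors: list) -> None:
    """'Test Script': run the business function and subroutine once per
    data-pool row, so only the data pool changes between cases."""
    with open(pool_path, newline="") as f:
        for row in csv.DictReader(f):
            payment_business_function(row, errors)
            verify_summary_subroutine(row, errors)

def driver(pool_path: str = "payments_pool.csv") -> None:
    """'Driver' script: set up the environment, load functions, run tests."""
    errors: list = []
    post_payment_test_script(pool_path, errors)
    for e in errors:
        print("VARIANCE:", e)   # reported to Test Manager / Test Director

Adding a 51st payment scenario is then a one-row change to the data pool; none of the four scripts is touched.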
It should be noted that the “Subroutine” script, which verifies the payment summary and transaction details, can also be used by other test cases and business functions (which is why it is classified as a “Subroutine” script rather than a “Business Function” script) – for example, payment reversals, adjustments and forecasts. Also, if different accounts need to be used, only the data pools need to be updated, not the actual scripts.
The graphic on the following page indicates how the framework for these test scripts will be modeled.
function in the script, and the subroutine script performs the multiple tasks of the business
function scripts.
Record size
Dataset profile