
Advisory Draft for discussion purposes only

Test Automation
Enterprise Testing
Methodology (ETM)

Date

Project name

Authors

Document version

Disposition {draft/final}


Authors
This document was prepared by:
John Doe, Release Test Manager, CIO group, Address, Toronto, ON, T: 416.364.5000, F: 416.364.5000, john.doe@abc.ca
John Doe, SIT Test Manager, CIO group, Address, Toronto, ON, T: 416.364.5000, F: 416.364.5000, john.doe@abc.ca
John Doe, UAT Test Manager, CIO group, Address, Toronto, ON, T: 416.364.5000, F: 416.364.5000, john.doe@abc.ca

Revision History
The softcopy of the most current version of this testing strategy plan can be found at <file location>.

Version Number    Revision Date (mm/dd/yyyy)    Summary of Changes    Document Author

Approval List

The following stakeholders agree that the information reported in this testing strategy plan is accepted
and approved. The information in this document may be used in test preparation and test execution.

Distribution
The softcopy of the most current version of this testing strategy plan has been distributed to the following
individuals.

Department Name Title

Documentation
The following documentation was utilized to create this testing strategy plan:

Document name Document Location



Table of Contents
Revision History 2
Approval List 3
Distribution 3
Documentation 3

1. Introduction 6

1.1. Purpose 6
1.2. Intended audience 6
1.3. Assumptions 6

2. Test Automation Overview 7

2.1. Automation History 7


2.2. Automation Objectives 8
2.3. Critical Success Factors 9
2.4. Test Phases Versus Test Types 9
2.5. Strategic Objectives 10
2.6. Test Management Automation 10
2.7. Test Script Automation 11

3. Test Automation Tools 12

3.1. Overview 12
3.2. HP's ALM Automation Center 12
3.2.1. Requirements 12
3.2.2. Test Script Development 12
3.2.3. Test Execution 13
3.2.4. Variances 13
3.2.5. Test Script Automation 13
3.2.6. Service-Oriented Architecture Testing 13
3.2.7. Business Process Testing 14

4. Automation Methodology and Process 16

4.1. Automation Approach 16


4.1.1. Data-Driven Testing 16

4.2. What Test Scripts Do We Automate and Why? 17


4.3. Automation Methodology 18
4.4. Automated Test Script Development 21
4.5. Automated Script Framework 22
4.6. Automated Test Script Example 23
4.7. Automation Test Script Design and Flow 25
4.8. Building the Regression Test Suite 26
4.9. Automated Testing Data 26
4.10. Automated Testing Data Leading Practices 27
4.10.1. Service-Oriented Architecture Testing 27

<The table of contents above is matched to the section outline below. No additions or modifications
should be required on this page. To update the table of contents to reflect current page numbers and
modified section titles, place the mouse pointer in the table of contents and right click. Select Update
Field, select Update entire table, and click OK. >

1. Introduction
1.1. Purpose
The purpose of this document is to recommend the automation tools (HP Unified Functional Testing),
leading practices, methodologies, data, resources, and training needs to achieve test automation
objectives, as well as provide a road map to meeting those objectives.
The scope of this document includes all current test automation tools and practices in use, as well as
additional tools and practices recommended to successfully meet test automation goals.

1.2. Intended audience


This document is directed towards all team members who will be directly involved with the testing efforts
for any products and services. Recipients of this document should read it in its entirety and bring any
issues and/or questions to their respective Test Automation Lead.

1.3. Assumptions
This document assumes the use of an automated test tool to implement test automation. Within this
document, examples and detail are derived from HP's ALM Automation Center for illustrative
purposes. However, other tools to automate test management and enable automated test script
creation may be used, such as IBM Rational TestManager and IBM Rational Robot.


2. Test Automation Overview


2.1. Automation History
In 2002, AMR Research reported that software defects cost businesses $60 billion to fix and 50% of
software costs were related to finding and fixing those defects. Unfortunately, that is just the out-of-
pocket cost. IT systems that do not solve real business problems or do not perform as promised impose a
similar economic toll on business costs and results. More than half of all software projects fail to meet
objectives or suffer significant schedule and budget slippage because defects are discovered too late.

In an age of accelerating product lifecycles and restrictive cost pressures, the traditional approach to
testing has had serious consequences that have put IT organizations between a “rock and a hard place”.
In today's business climate they are asked to do more with less: deliver higher quality systems in less
time with fewer resources. Additionally, when corporations tighten their budgetary belt, software testing
is often the first systems-development item to be cut. This puts IT in a bind. One way out of this bind is
to incorporate test automation practices within the application lifecycle.
Test automation can greatly reduce testing cycle times, especially during regression testing. It also
reduces costs and effort, and supports iterative development and more frequent releases.

Test automation, however, cannot be done overnight. It takes a great deal of time and effort to build and
maintain the test automation tools that drive the process and contribute to the results. Additionally,
modular or ‘data-driven’ test automation frameworks do not produce immediate benefits. Industry
research has shown that it takes at least two or three major releases to recover the effort expended in the
initial automation setup and script creation.

The longer-term cost savings for implementing test automation are indicated in the diagram below. This
diagram shows that test automation can produce up to an 80% cumulative savings over eight or more
software releases.

Test Automation and Cumulative Test Execution Savings as per Survey by PwC and HP


Test automation can yield significant savings in the time it takes to test a new application release. For
low-complexity projects, test automation can result in a 91% reduction in testing time. For medium-
complexity projects, test automation can reduce test planning and execution time by up to 85%.
For high-complexity projects, testing time can be reduced by 77%. Also, test automation
combined with accelerated iterative development will enable the client to deliver software
releases more frequently.

As per Cognizant Technologies Survey

A ‘data-driven’ test automation framework should be implemented in order to take advantage of the
cost-savings and increased application quality benefits that are made possible through automation.
Similar organizations that have implemented a ‘data-driven’ test automation framework have
experienced automated tests that were able to produce savings of more than 80% in time and cost
when compared with manual testing, once the automation framework was built.

2.2. Automation Objectives


Contrary to popular belief, the objective of automation is not just to develop automated scripts to
perform repetitive test execution tasks that recur from release to release. If sufficient planning is not
performed, automated test scripts developed strictly by using a capture tool can become difficult and
costly to maintain, because reliability issues emerge from any subtle change to the system interface.
The challenges experienced with automated test script development have prompted test professionals
to further define “test automation.”

A truly automated testing shop consists of:

 A test management automation tool to facilitate test planning, execution and variance
remediation

 A process to develop test artifacts which can be reused for various projects

 A suite of automated test scripts used to execute regression testing

These objectives are achieved through the use of a test automation framework which allows for the
development and use of modular test scripts which are easily maintainable and that can be leveraged
across several projects. The following sections will describe automation tool use and the ‘Data-
Driven’ testing methodology which is recommended to achieve the automation objectives discussed
above.


2.3. Critical Success Factors


The following are critical success factors for test automation:

 Test automation must be implemented as a full-time effort, not a “sideline”

 The test design process and the test automation framework must be developed as separate
entities

 The test framework must be application independent

 The test framework must be easy to expand, maintain, and enhance

 The test strategy/design vocabulary must be framework independent

 The test strategy/design must hide the complexities of the test framework from testers

2.4. Test Phases Versus Test Types


Test Phase
Ref # Test Types Unit DIT SIT UAT PAT Other
1 Code Testing  
2 Functional Positive Testing    
3 Negative Testing   
4 Performance Testing
5 Load Testing
6 Availability Testing
7 Response Testing
8 Reliability Testing
9 Regression Testing   
10 Security Testing
11 Penetration Testing
12 User Verification Testing   

<For each test phase, the types of testing that will be conducted should be described. In summary,
this description should explain the rationale for why the test phases/test types were specified as
opposed to other ways, as well as how this overall approach will meet the quality objectives.>
The following test stages are …


2.5. Strategic Objectives


The following strategic automation objectives are based on the critical success factors section:
 Implement a strategy that will allow tests to be developed and executed both manually (initial
test cycle) and via an automation framework (regression test cycles).
 Separate test design and test implementation to allow test designers to concentrate on
developing test requirements, test planning, and test case design while test implementers
build and execute test scripts.
 Implement a testing framework that both technical and non-technical testers can use.
 Employ a test strategy that assures that test cases include the navigation and execution steps
to perform, the input data to use, and the expected results all in one row or record of the input
data source.
 Implement an application-independent test automation framework.
 Document and publish the framework.
 Develop automated build validation (smoke) tests for each release of the application.
 Develop automated environmental setup utility scripts for each release of the application.
 Develop automated regression tests for:
 GUI objects and events
 Application functions
 Application special features
 Application reliability
 Application compatibility
 Database verification
 API Validation

2.6. Test Management Automation


Test Management Automation comprises the automated processes used to assist test leadership with
the planning, execution, tracking and reporting of a project's testing effort. Test Management
Automation is performed with the help of a test management software tool, which contains a
repository to store requirements, test scripts and variances. This enables the test team to develop and
maintain test scripts, build execution flows, perform traceability between test scripts and
requirements, log and track variances, and report metrics for a particular testing project. The tool
provides a single interface for business, development and testing teams to facilitate test planning,
coverage and variance remediation. Examples of Test Management Automation tools include HP's
ALM and IBM Rational TestManager.
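
For illustration, the same test management activities can also be driven programmatically. The following VBScript sketch assumes the HP ALM OTA COM client is installed on the workstation; the server URL, credentials, domain, project and defect fields are placeholders only, not values from any actual project.

' Sketch only: assumes the ALM OTA COM client is registered; all names below are placeholders.
Dim tdc, bugFactory, bug
Set tdc = CreateObject("TDApiOle80.TDConnection")    ' OTA client object
tdc.InitConnectionEx "http://almserver:8080/qcbin"   ' hypothetical ALM server URL
tdc.Login "tester01", "password"                     ' placeholder credentials
tdc.Connect "DEFAULT", "ETM_PROJECT"                 ' hypothetical domain and project

' Log a variance (defect) discovered during execution
Set bugFactory = tdc.BugFactory
Set bug = bugFactory.AddItem(Nothing)
bug.Summary = "Payment screen does not refresh Current Balance"
bug.DetectedBy = "tester01"
bug.Status = "New"
bug.Post

tdc.Disconnect
tdc.Logout
tdc.ReleaseConnection

In practice most of this interaction happens through the ALM user interface; the programmatic route is typically reserved for bulk updates and custom reporting.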


2.7. Test Script Automation


Test Script Automation is defined as the coding of test steps and expected results within a software
tool enabling automatic execution and validation of test scripts against business requirements. Test
script automation tools typically revolve around “record/playback” or “capture” functionality, where
the test automation engineer engages the tool then navigates through the objects and features of the
system under test using pre-defined parameters. The tool compares system responses with an
expected results file that has been hard-coded by the test automation engineer. Automated test
scripts can also be used to perform database file compares for Extract, Transform, and Load (“ETL”)
and other SQL-type transactions.
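
As a simple illustration of the record/playback style described above, the following UFT-style VBScript sketch posts a value and compares an on-screen field against a hard-coded expected result. The browser, page and field names, and the expected value, are hypothetical stand-ins for whatever a recorded script would capture.

' Sketch only: object names and the expected value are hypothetical.
Browser("Online Banking").Page("Payment").WebEdit("Amount").Set "100.00"
Browser("Online Banking").Page("Payment").WebButton("Post").Click

expectedBalance = "1,250.00"   ' hard-coded expected result, as in the capture approach
actualBalance = Browser("Online Banking").Page("Payment").WebElement("CurrentBalance").GetROProperty("innertext")

If Trim(actualBalance) = expectedBalance Then
    Reporter.ReportEvent micPass, "Balance check", "Current Balance matches the expected value"
Else
    Reporter.ReportEvent micFail, "Balance check", "Expected " & expectedBalance & ", got " & actualBalance
End If

Later sections replace the hard-coded expected value with values read from a data pool, which is what makes such scripts maintainable across releases.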


3. Test Automation Tools


3.1. Overview
Tools such as HP's ALM can be used to build a software testing framework. The next sections
provide information on the ALM tools, their use, and the role they play in ‘data-driven’ test
automation.

3.2. HP's ALM Automation Center


HP's ALM Automation Center is a test management automation tool containing different “modules”
enabling requirements traceability, test script development and test execution. These modules can be
accessed at different points in the software development lifecycle by the test team, business team and
development team, to execute and track the project testing effort.

3.2.1. Requirements
The ‘Requirements’ module of ALM provides the test team with a repository to develop and review
requirements for the system being developed. This repository is structured using a hierarchical
parent-child tree. The lowest “child” in this tree should contain the simplest use case object ID, in
business rule or supplemental specification form.

Testers should review these requirements and evaluate them for testability, i.e. each business rule or
supplemental specification can be explicitly passed or failed by performing a single system activity.

Requirements can be reviewed and approved as part of a quality gate. After approval, testers
can leverage the ‘Generate Tests’ functionality within this module. This function automatically builds
a prototype for all tests to be designed in the ‘Test Plan’ module with the same use case and object ID
structure as defined in the Requirements module.

The ALM Requirements module serves as a repository for business requirements only. It does not
provide any requirements generation or requirements management functionality offered by other
tools. It is recommended that additional tools be acquired for use to streamline the requirements
management process and integrate the business team earlier in the automated test management
lifecycle.

3.2.2. Test Script Development


The ‘Test Plan’ module is used by the test analysts to develop the test scripts used to validate business
requirements. These scripts are designed using the ‘Action-Result’ test script development
methodology, providing the user with the expected results of each step as they interact with the system.
The test script should serve as the definitive test artifact and should be developed with data
parameters to test all applicable scenarios and boundaries. ‘Data pools’ will be constructed with all
variations of data being input into each parameter.


The scripts will be reviewed by business and development teams to ensure that the combination of
design steps and data pools will test all requirements. After the test scripts have been reviewed and
approved, the test analyst can map each of the design steps to the requirements in the Requirements
module.
The Test Plan module integrates directly with Automation Center, enabling testers to develop
automated test scripts right in ALM.

3.2.3. Test Execution


The ‘Test Lab’ module allows the test analyst to organize test scripts into test execution sequences
called ‘Test Sets’. These test sets are constructed by the test analyst to execute any combination of
manual and automated scripts to validate system functionality. Tests can be executed using the
‘Manual Run’ functionality and all execution results are captured and saved by the tool. In addition,
any variances discovered during test execution can be opened as they are discovered, enabling test
design step information to be pre-populated into the variance.

3.2.4. Variances
As the testing team executes the test scripts, actual results are compared against the expected results
and pre-determined acceptance criteria. Any discrepancy between the actual and expected results will
be considered a variance and will be tracked, investigated and remedied using the “Defects” module.
The Defects module is workflow-based, enabling business, development and testing users to
collaborate, view and track variances through the entire remediation process.

3.2.5. Test Script Automation


HP's ALM Automation Center is a tool used to develop automated test scripts. The foundation of
Automation Center is Unified Functional Testing (UFT), which is underpinned by 'Record'
functionality that enables the automation test engineer to capture application validation checkpoints.
The test engineer can build data tables in Microsoft Excel which can be customized to generate all
expected results to obtain complete test coverage. With the use of the ‘Record’ function, Automation
Center does not require significant technical knowledge to develop automated test scripts. However,
for advanced debugging of test scripts, the automated test scripts can be viewed using the built-in
VBScript editing tool. This capability allows for further customization of automated scripts.
The automated test script effort will be focused on building out the regression testing suite. Please
refer to Section 4.8 of this document for more information about the regression suite.

3.2.6. Service-Oriented Architecture Testing


New tools are emerging to conduct automated service-oriented architecture (SOA) testing such as
Solstice’s Integra Suite and HP's ALM Performance Center tools. These tools enable test analysts to
validate messaging structure, content, and path between service layers, and also provide stubbing and
simulating of services that are unavailable or have not been developed.
The ALM product suite includes Performance Center (also known as the LoadRunner tool), enabling
both performance and functional testing of system services.
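
As a rough sketch of message-content validation, independent of any specific SOA tool, the following VBScript loads a saved response message and checks one element. The file path, XPath expression and expected value are hypothetical, and the Reporter calls assume the script is run from UFT.

' Sketch only: the response file, XPath and expected value are placeholders.
Dim xmlDoc, statusNode
Set xmlDoc = CreateObject("MSXML2.DOMDocument.6.0")
xmlDoc.async = False

If Not xmlDoc.Load("C:\TestData\PaymentResponse.xml") Then
    Reporter.ReportEvent micFail, "SOA response", "Response message could not be parsed"
Else
    Set statusNode = xmlDoc.SelectSingleNode("//PaymentResponse/Status")
    If statusNode Is Nothing Then
        Reporter.ReportEvent micFail, "SOA response", "Status element not found in the message"
    ElseIf statusNode.Text = "ACCEPTED" Then
        Reporter.ReportEvent micPass, "SOA response", "Status element contains the expected value"
    Else
        Reporter.ReportEvent micFail, "SOA response", "Unexpected status: " & statusNode.Text
    End If
End If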


3.2.7. Business Process Testing


HP's ALM has a module available called ‘Business Process Testing’, which allows business and test
analysts to get involved in automated test design without the need for any programming experience.
HP defines this module as “(the) complete system for functional test case design for both automated
and manual testing. It enables non-technical subject-matter experts to become an integral part of the
quality optimization process, automates the creation of test-plan documentation, and streamlines test
maintenance for application change.” This means that business or functional experts can define high-
level test flows using this web-based test design module which integrates completely with ALM. The
‘Business Process Testing’ module should be considered for use in projects once the ‘data-driven’
testing framework has been implemented.


4. Automation Methodology and Process
4.1. Automation Approach
The ‘Data-Driven’ testing framework is the foundation of a successful test automation effort. This
framework requires the test lead to analyze the application(s) to be tested and design the test
scripts in a fashion that would allow them to be modular and re-usable across multiple releases,
while satisfying the criteria of being “data-driven” as described in Section 5.1.1 below.

4.1.1. Data-Driven Testing


A Data-Driven approach to test planning reduces the number of test scripts that need to be
developed and focuses instead on the number of test executions performed with those scripts
and their associated lists of data inputs. Test script development planning also leverages
keyword-driven test design, with business and test analysts identifying the 'keyword'
utility scripts to be developed, which can also be leveraged by the development team.
The Data-Driven approach focuses the test analyst’s efforts in creating the steps necessary to
navigate through the system, enter data, and trigger a system response only once, with the
majority of the effort going into identifying what data needs to be entered to validate the system
using the identified test script steps. This data is identified and consolidated into a ‘data pool’,
which is also reviewed as part of the test script reviews with the business team. To support the
use of the data pool, a ‘calculation engine’ is also created where applicable. The calculation engine
supplies the test analyst with the interim and final data calculations, and is used to generate the
expected results of a specific data pool scenario. A more detailed explanation of these artifacts is
provided below.

4.1.1.1. Data Pools


The data to be used in the creation of test scripts is defined in data pools. Data pools supply data
values for the test execution effort at both a single test script and multiple script or suite level.
Data pools contain the description of the input data (data type), the actual value of the data type
(data input), interim data values generated by the system (if any), and output data values
(expected results). There can be many input data values – variations and combinations of data
values that exercise the system functionality – which in turn leads to greater confidence in the overall
quality of the system being developed.
Data pools are not the transactional databases, shared databases, or mainframe datasets used by
the applications under test. Data pools simply define which values in the system databases the
test analyst needs to use as input and provide the expected outcome of that data input. These
values already exist in the test dataset or need to be created as part of the test data generation
exercise. A data pool is a testing artifact that is associated with a specific test script or test set,
and should contain both positive (data accepted/expected by the system) and negative (data not
accepted by the system) data sets.
A data pool contains at a minimum one row and at most N rows (where 'N' is defined by
the test analyst and business analyst), and each row includes a description of the data (data type),
the input data value, interim values generated by the system (if appropriate), and expected output
data values. Multiple combinations of data types can be defined in the data pool for test
execution. Practically speaking, the data pool may be a Microsoft Excel spreadsheet with columns
for the data type, data input value, interim values, and output data values or expected results.
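
In UFT terms, a data pool of this shape can be held in the tool's data table or imported from an Excel workbook. A minimal sketch of reading it row by row is shown below; the sheet and column names are hypothetical and would match whatever columns the data pool actually defines.

' Sketch only: sheet and column names are illustrative.
Dim poolSheet, rowCount, i, dataType, inputValue, expectedResult
' DataTable.ImportSheet "C:\TestData\PaymentDataPool.xls", "Sheet1", "PaymentDataPool"   ' optional Excel import
Set poolSheet = DataTable.GetSheet("PaymentDataPool")
rowCount = poolSheet.GetRowCount

For i = 1 To rowCount
    poolSheet.SetCurrentRow i
    dataType       = DataTable.Value("DataType", "PaymentDataPool")        ' e.g. positive or negative case
    inputValue     = DataTable.Value("PaymentAmount", "PaymentDataPool")   ' input data value
    expectedResult = DataTable.Value("ExpectedBalance", "PaymentDataPool") ' expected output value
    ' ... pass this row's values to the business function script ...
Next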

4.1.1.2. Calculation Engines


A calculation engine is a spreadsheet that calculates the expected interim and/or output results
for any given set of input data. The complexity of the system demands that the tester be able to
verify calculations at all points in the test script. Typically, a calculation engine will be
developed during the test script development phase and can cover many test scripts.
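
In practice the calculation engine is usually an Excel workbook, but the idea can be sketched as a simple function that derives an expected result from the input data. The formula below is a deliberately trivial, hypothetical example.

' Sketch only: a trivial stand-in for a calculation engine formula.
Function ExpectedBalance(openingBalance, paymentAmount)
    ExpectedBalance = CDbl(openingBalance) - CDbl(paymentAmount)   ' expected output value for the data pool
End Function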

4.2. What Test Scripts Do We Automate and Why?


The flexibility built into many technical products dictates that not all test scripts should be
automated. Each test script should be evaluated for its inclusion in the automated suite. The
following criteria should be used to identify test scripts for automation:
 Business factors:
o Criticality of the business functions
o High-path frequency

 Technical factors:
o Stability of the component
o Scripting difficulty

 Tests that are good to automate:


o Tests that we need to run for every build of the application
o Tests that use multiple values for the same actions (Data-driven)
o Identical tests that need to be executed
o Business-critical processes/transactions
o Business processes that will not change

 Tests not to automate:


o New or unstable functionality/features
o Business processes used for one time testing
o Ad-hoc/random testing
- Based on intuition and knowledge of the application

Note: It is very important to remember that sometimes not automating saves as much time as
automating.


Given the effort required to develop and maintain automated scripts and baseline automation test
data, and the instability of system functionality, it is recommended that approximately 40% of the
test scripts be targeted for automation.

4.3. Automation Methodology


The recommended implementation for any project new to test automation is to use a layered
approach, represented as follows:

 Level 1: Record and Playback – This automation will be used to perform “smoke tests” for
newly delivered builds before test execution begins
 Level 2: Modularized/Componentized Scripting – This automation will be used to perform
functional testing. The automation scripts will be designed to test the smallest clusters of
functionality, with larger “end-to-end” scripts being constructed from the script repository to
be used for the ‘Executing’ phase. This provides flexibility of the automation suite to adapt to
changes in the application, reduces test script maintenance, and increases automated test
coverage
 Level 3: Technical Testing – This automation will be used to perform system/services
interface testing (XML messages) and database validation (SQL via ODBC)
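
The Level 3 database validation can be sketched in VBScript using standard ADODB over ODBC, as shown below. The data source name, SQL statement and expected value are placeholders; in a real script the expected value would come from the data pool or calculation engine.

' Sketch only: DSN, SQL and expected value are placeholders.
Dim conn, rs, actualBalance, expectedBalance
expectedBalance = 1150.00   ' would normally be read from the data pool / calculation engine

Set conn = CreateObject("ADODB.Connection")
conn.Open "DSN=TestDB;UID=testuser;PWD=password"   ' hypothetical ODBC data source

Set rs = conn.Execute("SELECT current_balance FROM accounts WHERE account_id = '1001'")
If Not rs.EOF Then
    actualBalance = rs.Fields("current_balance").Value
End If
rs.Close
conn.Close

If CDbl(actualBalance) = CDbl(expectedBalance) Then
    Reporter.ReportEvent micPass, "DB validation", "Posted payment is reflected in the database"
Else
    Reporter.ReportEvent micFail, "DB validation", "Expected " & expectedBalance & ", got " & actualBalance
End If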

To accomplish these automation tasks, the effort will need to focus on two work streams in the
short-term:
1) Integrating the test script automation process to be performed in parallel with the release
functional testing cycle
2) Automating the selected regression test scripts that currently exist for past releases


The automation methodology described in this section will be utilized for both work streams. The
first automation effort will initiate at the conclusion of the latest release and will consist of
automating all possible regression test scripts from past releases.
This effort will be performed in segments, targeting an overall regression test suite percent
complete at the end of each segment. Over time, we will achieve the overall regression suite target
automation state, enabling the automation effort for the following release to be conducted in
parallel with functional testing activities for that same release under test. Both work streams will
utilize the automated test script selection criteria provided within this document.

[Diagram: two parallel work-stream timelines spanning Release 1 through Release x, showing the automation and functional testing activities syncing over time.]

For the best results, the test automation effort will be considered a standalone project and will
adhere to systems development lifecycle phases: Preparation, Planning, Development, Execution,
and Transition.

4.3.1. Preparation Phase


During the ‘Preparation’ phase, requirements for the system are analyzed to determine the
automation scope. To accomplish this, all external and internal entities of the system must be
identified and the nature of the system must be defined at the highest level. This involves identifying
all requirements and describing any ambiguous requirements in detail. Also included in this task are
identifying the success criteria, assessing risk, estimating the number of resources needed, and
producing a phase plan showing the dates of major milestones.


The outcome of the ‘Preparation’ phase is:


 General vision of the automation requirements, key features to automate, and main
constraints
 The design of the automation model
 An initial risk assessment
 A project plan, showing phases and iterations of testing
 At least one prototype

4.3.2. Planning Phase


The purpose of the ‘Planning’ phase is to analyze the problem domain, establish automation
architecture, develop the project plan, and eliminate the highest risk elements of the project. To
accomplish these objectives, a broad view of the system to be automated must be obtained, as well as
an understanding of the end-to-end system functionality and interfaces: scope for automation, major
functionality and nonfunctional requirements which can be automated.
During the ‘Planning’ phase, an executable automation prototype is built for one or more
requirements, depending on the scope, size, risk, and unique attributes of the project.
The outcome of the ‘Planning’ phase is:
 Completed automation model — all requirements and functionality have been identified,
and most scenarios have been developed
 An executable prototype is developed
 A revised risk list and a revised business case are developed
 A development plan for the overall automation is completed

4.3.3. Development Phase


During the ‘Development’ phase, all of the remaining automated test scripts are developed in Unified
Functional Testing (UFT) and integrated into test scenarios. Enhancements are made to the existing
test scripts if required and all scripts are thoroughly unit tested. The test data bed is completed and
ready to use with automation scripts.
The outcome of the ‘Development’ phase is:
 Automation scripts are ready to execute for testing the application with minimal human
error
 The automation scripts are integrated into the test repository in HP's ALM Automation Center
 Test scenarios are developed and reviewed by business analysts
 All of the test scripts are mapped to the requirements for traceability and test coverage
 Test data is created and reviewed by business analysts
 Roll-out of the scripts for use in the ‘Execution’ phase


4.3.4. Execution Phase


During the ‘Execution’ phase, all of the automated test scripts are executed and the results are logged
into HP’s ALM. Variances are communicated to the variance management team. Fixed variances are
retested and closed.
The outcome of ‘Execution’ phase:
 Achieving 100% script execution
 Closing all of the Severity 1 and Severity 2 variances before the ‘Transition’ phase
 Generating the traceability matrix for the test coverage
 Generate the test result documents

4.3.5. Transition Phase


The purpose of the ‘Transition’ phase is to transition the scripts and the knowledge. This phase
consists of finalizing execution documentation, training additional test analysts, supporting users in
their initial use, and evaluating user feedback. The 'Transition' phase is entered when testing has
been completed to an acceptable level of quality and the product has been deployed in the end-user domain.
The 'Transition' phase goals are:
 To achieve a final automation baseline as rapidly and cost effectively as practical
 To conduct 'Lessons Learned' sessions that are documented and used in future releases
 To conduct training for the remainder of the automated test team and applicable manual
test team members

4.4. Automated Test Script Development


The main concept behind automated test script development is to reduce all test scripts to their
most fundamental tasks and to write user-defined functions, business function scripts, and "sub-
routine" or "utility" scripts which perform these tasks independently of one another. In general,
these fundamental areas include:
 Navigation
 Specific Business Function
 Data Verification
 Return
In order to accomplish this, it is necessary to separate 'data' from 'function'. This allows an
automated test script to be written for a business function, using data files to provide both the
input data and the expected-results verification, meeting the 'data-driven' objectives described in this
document. A hierarchical architecture is employed using a structured or modular design.
The highest tier in the script hierarchy is the ‘driver’ script, which is the engine of the test. The
‘driver’ begins a chain of calls to the lower-level hierarchical components of the test. ‘Drivers’ may
perform one or more test case scenarios by calling one or more main scripts. The main scripts
contain the test case logic, calling the business function scripts necessary to do the application
testing. All utility scripts and functions are called as needed by driver scripts, main scripts and
business function scripts.
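
A skeletal sketch of this hierarchy in UFT VBScript follows. The routine names are illustrative only; in a real project each routine would live in its own script or function library and be called through the driver.

' Sketch only: routine names illustrate the hierarchy and are not actual project scripts.
Sub Driver_Regression()
    Call Util_SetEnvironment("SIT")              ' utility script: initialization
    Call Main_PostPaymentScenario()              ' main script: one test case scenario
End Sub

Sub Main_PostPaymentScenario()
    Call BF_PostPayment(1)                       ' business function script, row 1 of the data pool
    Call SR_VerifyAccountSummary(1)              ' subroutine script, reusable across test cases
End Sub

Sub BF_PostPayment(rowNumber)
    ' navigate, enter the row's input data, trigger the system response, verify the result
End Sub

Sub SR_VerifyAccountSummary(rowNumber)
    ' compare displayed values against the expected results held in the data pool
End Sub

Sub Util_SetEnvironment(envName)
    Environment.Value("TestEnvironment") = envName   ' user-defined UFT environment variable
End Sub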


4.5. Automated Script Framework


Driver Scripts – Perform initialization (if required), then call the main scripts in the
desired order
Main Scripts – Perform the application test case logic using ‘Business Function’ scripts
Business Function Scripts – Perform specific business functions within the application
Subroutine Scripts – Perform application specific tasks required by two or more business function
scripts
User-Defined Functions – General, Application-Specific, and Screen-Access Functions

Note: “Functions” can be called from any of the above script types.

The automated test script framework is depicted in the graph below:

[Diagram: Automated Test Script Framework. The Driver Script sets environment variables and data pool connections where they exist (via a Utility Script and User-Defined Functions or a library), then calls the Main Scripts; Main Scripts call Business Function Scripts (BFS) and Subroutine Scripts; results are reported to Test Manager/Test Director.]


Advantages
 Utilizing a modular design and using files or records to both input and verify data
reduces redundancy and duplication of effort in creating automated test scripts.


 Scripts may be developed while application development is still in progress. If
functionality changes, only the specific 'Business Function' script needs to be updated.
 Since scripts are written to perform and test individual business functions, they can easily
be combined in a “higher level” test script in order to accommodate complex test
scenarios.
 Data input/output and expected results are attached to the test script in ALM as easily
maintainable text records. The user's expected results are used for verification, which is a
requirement for System Testing.
 Functions return 'TRUE' or 'FALSE' values to the calling script, rather than aborting,
allowing for more effective error handling and increasing the robustness of the
automated test scripts. This, along with a well-designed “recovery” routine, enables
“unattended” execution of test scripts.
Disadvantages
 Requires proficiency in the VBScript language used in Unified Functional Testing
(UFT)
 Test script debugging is time-consuming, often requiring automation test engineers to
locate and repair small and ambiguous parameters, such as the size of a pop-up window,
which are preventing the script from running correctly
 Effort to maintain test scripts is significant if there is any change in functionality
 Maintenance of the automated test data set may become difficult as more functionality is
added to the automated suite

4.6. Automated Test Script Example


The following steps illustrate these automated test script development concepts using a generic
financial services transaction scenario titled “Post a Payment”. The flow of this transaction is as follows:
1. Access ‘Payment’ screen from main menu
2. Post a payment
3. Verify that ‘Payment’ updates the ‘Current Balance’
4. Return to main menu
5. Access ‘Payment Summary Screen’ from main menu
6. Verify ‘Payment Summary’ updates
7. Drill down to ‘Transaction Details’ screen from ‘Payment Summary’
8. Verify the transaction details
9. Return to the main menu

A ‘Business Function’ and ‘Subroutine’ data-driven automated test script could be written as follows:
‘Payment’ Function
1. Start at Main Menu
2. Invoke a “Screen Navigation Function” to access the ‘Payment’ Screen

3. Read a data file containing specific data to enter for this test and input the data
4. Press the button or function-key required to ‘Post’ the payment
5. Read a data file containing specific expected results data
6. Compare this data to the data which is currently displayed in the confirmation screen
7. Write any discrepancies to an error report
8. Press button or key required to return to Main Menu or, if required, invoke a “Screen
Navigation Function” to do this.
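
As a rough illustration only, the steps above might translate into a data-driven UFT VBScript business function along the following lines. The object names, data pool columns and the NavigateTo helper are hypothetical.

' Sketch only: object names, data pool columns and NavigateTo are hypothetical.
Sub BF_PostPayment(rowNumber)
    DataTable.GetSheet("PaymentDataPool").SetCurrentRow rowNumber

    Call NavigateTo("Payment")                                          ' screen navigation function
    Browser("App").Page("Payment").WebEdit("Amount").Set DataTable.Value("PaymentAmount", "PaymentDataPool")
    Browser("App").Page("Payment").WebButton("Post").Click              ' post the payment

    expected = DataTable.Value("ExpectedBalance", "PaymentDataPool")    ' expected results data
    actual = Browser("App").Page("Payment").WebElement("CurrentBalance").GetROProperty("innertext")
    If Trim(actual) <> Trim(expected) Then
        Reporter.ReportEvent micFail, "Post a Payment", "Expected " & expected & ", got " & actual   ' write discrepancy
    End If

    Call NavigateTo("MainMenu")                                         ' return to main menu
End Sub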

‘Verify_Acct’ (Verify Account Summary & Transaction History) Subroutine


1. Start at Main Menu
2. Invoke a “Screen Navigation Function” to access the 'Payment Summary' screen
3. Read a data file containing specific expected results data
4. Compare this data to the data which is currently displayed in ‘Payment Summary’
5. Write any discrepancies to an error report
6. Press button or link required to access drill down of transaction history
7. Read a data file containing specific expected results data
8. Compare this data to the data which is currently displayed.
9. Write any discrepancies to an error report
10. Press button or key to return to Main Menu or invoke a “Screen Navigation Function”

The ‘Driver’ script would set the test environment, load the user-defined functions, and then call the
test case script, which invokes the “Business Function” and “Subroutine” to perform the above test.
The test script would call these two scripts the number of times required to perform all the required
test cases’ data scenarios of this kind. In each case, the only thing that changes are the data contained
in the files that are read and processed by the “Business Function” and “Subroutine” scripts.


Using this method, if we needed to process 50 different kinds of payments in order to verify all of the
possible conditions, then we would need only 4 scripts which are re-usable for all 50 cases:
 the “Driver” script,
 the “Test Script” (Post a Payment & Verify Results) script,
 the “Payment” Business Function script, and
 the “Verify Payment Summary & Transaction Details” Subroutine script.
It should be noted that the “Subroutine” script, which verifies the payment summary and
transaction details, can also be used by other test cases and business functions (which is why it is
classified as a “Subroutine” script rather than a “Business Function” script) – for example payment
reversals, adjustments and forecasts. Also, if different accounts need to be used, then all that needs to
be done is to update the data pools and not the actual scripts.
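
Assuming the hypothetical script names used in the earlier sketches, the driver's data loop might look like the following; the only thing that changes between executions is the data pool row.

' Sketch only: loops the reusable scripts over every row in the data pool.
rowCount = DataTable.GetSheet("PaymentDataPool").GetRowCount
For rowNumber = 1 To rowCount                 ' e.g. 50 payment variations
    Call BF_PostPayment(rowNumber)            ' business function script
    Call SR_VerifyAccountSummary(rowNumber)   ' subroutine script: summary and transaction details
Next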
The graphic on the following page indicates how the framework for these test scripts will be modeled.

4.7. Automation Test Script Design and Flow


To be most effective and efficient, test scripts must be carefully designed. Well-designed test
scripts have the highest likelihood of finding the most errors with a minimum amount of time and
effort.
Test scripts should have detailed instructions that explain what is required to test the script, and
should have clear comments describing what each function in the script does.
Test scripts should be developed using a modular design so that they are repeatable and reusable.
This is key to the project, in that modular scripts provide consistent, repeatable results, can be
reused to execute different scenarios, and can be carried into future projects. To fully implement this
re-use strategy, the automated test script development should build on the manual 'data-driven'
testing approach.
Test scripts to be automated should first be identified during the design phase of the project.
During informal working sessions with system testers and the business, the automation testers will
modify, add to, or delete from the initial list of test scripts in the automation scope. Then, during
the artifact quality review of the test scripts, the review team confirms the definition of the test scripts
as well as the nominations for formal review and sign-off.
During these informal working sessions, system testers and the business may review and suggest changes to
the scripts based on their knowledge of the system and its functionality. All test scripts are
designed such that they are not dependent upon other systems. This enables testers to operate
independently and not wait for code from other subsystems.
Scenarios are defined as the ‘Test Sets’ in the ‘Test Lab’ module and the driver script drives them.
Test suites generally leverage existing test scripts and add connectors between the scripts as
necessary in the driver script. Test sets can exist within the boundaries of a single
platform/application or multiple applications. The test sets reference multiple test scripts and are
referred to as “Integration” tests, which are more complex in nature and typically require
significant data preparation.
The Test Analyst executes the test set manually or by an unattended process through HP's
Automation Center interface in the 'Test Lab' module of ALM. The test set is a script which
consists of calls to multiple scripts for a scenario. Driver scripts have the functionality to perform
initialization and then call the main scripts in the desired hierarchical order. Main scripts perform the
application test case logic using business function scripts or the subroutine scripts and user-
defined functions. Main scripts call business function scripts to perform the specific business
function in the script, and the subroutine script performs the multiple tasks of the business
function scripts.

4.8. Building the Regression Test Suite


The purpose of regression testing is to validate that changes to the existing software did not
adversely affect existing functionality or interfaces. Regression testing is a technique applied at
any level of testing to detect any unintended side effects resulting from changes made to an
established software base.
In nominating test scripts for the regression suite, the tester will be adopting the Selective /
Coverage test approach. Nominations for inclusion in the regression suite will include all of the
primary business flows identified in the requirements. Additionally, depending on the complexity
of the functionality and the business risk of leaving errors undetected, the test analyst will
nominate additional alternate and exception flows through the program.
This approach requires that the business, testing and development teams work together to
nominate the regression test scripts. The developer provides an assessment of the code
complexity for functionality within the requirements and the business analyst identifies the risk
to the business of potentially leaving errors undetected. Where the code complexity and/or the
risk are high, the associated test scripts will be nominated for inclusion in the regression suite.

4.9. Automated Testing Data


Test data is one of the critical components of a successful test automation effort. The
proper definition, creation and management of test data are a complex set of activities involving
the cooperative efforts of test analysts, database analysts and systems engineers. Planning for test
data should commence as soon as the data requirements are defined in the system specification
documents. Automation test data planning should include test data baselining and test data
restore procedures.
Test data creation and maintenance is required to support all phases of testing. If production
data or data from prior releases is available, the test team can use this data as a baseline for the
subsequent system test phase. Modifications resulting from new functionality being introduced
in the system need to be identified, defined, and a plan established to upgrade the baseline
dataset.
The baseline dataset often starts as a manageable dataset for each product or function, which is
beneficial if the database schemas are not stable or finalized until the development phase is
complete. As the test environments mature for each of the products, a larger dataset can evolve to
support the full regression test requirements.


4.10. Automated Testing Data Leading Practices


The ‘Test Data’ section of the automation test plan should describe:

 Test database names and schemas

 Database access permission groups and login info

 Record size

 Dataset profile

 Test data request process

 Test data backup procedures

 Test data restore schedules

 Regression test dataset creation

This information will be used in preparation for testing activities.


The test database names, schemas, and access information are used during test script development to
provide the user with all information required to access and validate test results. Dataset profile
information will indicate the size (how many individual customer records) of the test data being used.
This information can be used to assign specific customer record data to each data pool (i.e. Test
Analyst #1 has been assigned the following 25 accounts to use in their data pool creation). This
practice will allow testers to reference the customer account to identify how the data has been
set up for automated testing.

4.10.1. Service-Oriented Architecture Testing


Service-oriented architecture (SOA) incorporates five key principles into all IT development and
deployment activities across the organization. These principles (portability, cross-platform
interoperability, platform independence, productivity, and scalability) all guide SOA development
toward supporting the end-to-end processes that generate the greatest value for a business.
SOA testing focuses on testing the embedded workflow and integration layers of technical
business components, rather than functional testing through a user interface. Testing these
application integration points may reduce the need to perform end-to-end regression testing of
complete systems after development enhancements are made on one component of the system.
