
Part III: Automated Software Testing Processes (by Elfriede Dustin, derived from our book Implementing Automated Software Testing, Feb 2009)
Our experience, together with the results of the IDT survey we conducted over the course of a year with over 700 responses from all over the world, indicates that many automated software testing efforts fail due to a lack of process.
Implementing a successful Automated Software Testing effort requires a well-defined, structured, yet lightweight technical process with minimal overhead. The Automated Software Testing technical process described here is based on proven system and software engineering processes[i]. It consists of five phases, each requiring the passing of a quality gate before moving on to the next phase. By implementing quality gates, we help ensure that quality is built into the entire Automated Software Testing implementation, thus preventing late and expensive rework. See Figure 2, Automated Software Testing Quality Gates.
The overall process implementation can be verified via inspections, quality checklists
and other audit activities, each further discussed later in this section.

Figure 1: The ATLM[ii]

At IDT we modified the described Automated Testing Lifecycle Methodology (ATLM) to adapt it to our needs; see Figure 1, IDT's modified ATLM, a process that is further defined next.

Our proposed Automated Software Testing technical process needs to be flexible enough
to allow for ongoing iterative and incremental improvement feedback loops, including
adjustments to specific project needs. For example, if test requirements and test cases already exist for a project, the Automated Software Testing effort will evaluate the existing test artifacts for reuse, modify them as required, and mark them as to-be-automated, instead of re-documenting the test requirements and test cases from scratch. The goal is to reuse existing components and artifacts, modifying them as appropriate, whenever possible.
The Automated Software Testing phases and selected best practices need to be adapted to
each task at hand and need to be revisited and reviewed for effectiveness on an ongoing
basis. An approach for this is described here.
The very best standards and processes are not useful if stakeholders don't know about them or don't adhere to them. Therefore, Automated Software Testing processes and procedures are documented, communicated, enforced, and tracked. Automated Software Testing process training will need to take place.
Our process best practices span all phases of the Automated Software Testing lifecycle. For example, in the Requirements Phase an initial schedule is developed, which is then maintained throughout each phase of the Automated Software Testing implementation (e.g., updating percentage complete to allow for program status tracking). See the section on Quality Gates activities related to schedules.
Weekly status updates are also an important ingredient for successful system program
management, which spans all phases of the development life-cycle. See our section on
Quality Gates, related to status reporting.
Post-mortems, or lessons learned, play an essential part in these efforts; they are conducted to help avoid repeating past mistakes in ongoing or new development efforts. See our section on Quality Gates, related to inspections and reviews.
By implementing quality gates and related checks and balances throughout Automated Software Testing, the team is responsible not only for the final test automation deliverables but also for helping ensure that quality is built into the entire Automated Software Testing life-cycle. The Automated Software Testing team is held responsible for defining, implementing, and verifying quality.
The goal of this section is to provide program management and the technical lead with a solid set of Automated Software Testing technical process best practices and recommendations that will ultimately improve the quality of the Automated Software Testing program, increase productivity with respect to schedule and work performed, and help Automated Software Testing efforts succeed rather than fail.

Automated Software Testing Phases and Milestones:


Independent of the AUT's specific needs, Automated Software Testing will apply a structured technical process and approach to automated testing and a specific set of Automated Software Testing phases and milestones to each program.
Those phases consist of:

Automated Software Testing Phase 1 - Requirements Gathering: Analyze automated testing needs and develop high-level automated test strategies
Automated Software Testing Phase 2 - Design and Develop Test Cases
Automated Software Testing Phase 3 - Automation Framework and Test Script Development
Automated Software Testing Phase 4 - Automated Test Execution and Results Reporting
Automated Software Testing Phase 5 - Program Review

Our proposed overall project approach to accomplish automated testing for a specific
effort is listed in the project milestones below.

Automated Software Testing Phase 1: Requirements Gathering - Analyze Automated Testing Needs
Phase 1 will generally begin with a kick-off meeting. The purpose of the kick-off meeting is to become familiar with the AUT's background, related testing processes, automated testing needs, and schedules. Any additional information regarding the AUT will also be collected for further analysis. This phase serves as the baseline for an effective Automated Software Testing program, i.e., the test requirements will serve as a blueprint for the entire Automated Software Testing effort.
The following information is desirable to be available for each application:

Requirements
Test Cases
Test Procedures
Expected Results
Interface Specifications

In the event the needed information is not available, the automator will work with the customer to derive and/or develop it as needed.
Additionally, during this phase Automated Software Testing efforts will generally follow
this process:

1. Evaluate the AUT's current manual testing process and determine
a. areas for testing technique improvement
b. areas for automated testing
c. the current Quality Index, as applicable (depending on the AUT's state)
d. initial manual test timelines and duration metrics (to be collected and used as a comparison baseline for Automated Software Testing ROI)
e. the Automation Index, i.e., what lends itself to automation (see the next item; a scoring sketch follows this list)
2. Analyze existing AUT test requirements for automatability
a. If program requirements or test requirements are not documented, the Automated Software Testing effort will include documenting the specific requirements that need to be automated, to allow for a Requirements Traceability Matrix (RTM)
b. Requirements are automated based on various criteria, such as
i. Most critical feature paths
ii. Most often reused (i.e., automating a test requirement that only has to be run once might not be cost-effective)
iii. Most complex areas, which are often the most error-prone
iv. Most data combinations, since testing all permutations and combinations manually is time-consuming and often not feasible
v. Highest-risk areas
vi. Most time-consuming test areas, for example, test performance data output and analysis
3. Evaluate the test automation ROI of each test requirement
a. Prioritize test automation implementation based on largest ROI[iii]
4. Analyze the AUT's current life-cycle tool use and evaluate Automated Software Testing reuse of existing tools
a. Assess and recommend any additional tool use or required development
5. Finalize the manual test effort baseline to be used for ROI calculation
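The Automation Index mentioned in step 1 can be thought of as a weighted score over criteria such as those listed in step 2. The following is a minimal sketch of such a scoring, not taken from the book; the criteria weights, field names, rating scale, and requirement IDs are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative weights for the automation criteria listed above (assumed values).
WEIGHTS = {
    "critical_path": 3,      # most critical feature paths
    "reuse_frequency": 3,    # how often the test will be re-run
    "complexity": 2,         # most complex, error-prone areas
    "data_combinations": 2,  # many data permutations to cover
    "risk": 3,               # highest-risk areas
    "manual_effort": 2,      # most time-consuming to run/validate manually
}

@dataclass
class TestRequirement:
    req_id: str
    scores: dict  # each criterion rated 0 (does not apply) to 5 (applies strongly)

def automation_index(req: TestRequirement) -> float:
    """Weighted score indicating how well a test requirement lends itself to automation."""
    total = sum(WEIGHTS[c] * req.scores.get(c, 0) for c in WEIGHTS)
    return total / sum(WEIGHTS.values())  # normalize back to the 0-5 rating scale

# Example: rank candidate requirements so the highest-value automation work comes first.
candidates = [
    TestRequirement("REQ-101", {"critical_path": 5, "reuse_frequency": 4, "risk": 4}),
    TestRequirement("REQ-205", {"reuse_frequency": 1, "manual_effort": 2}),
]
for r in sorted(candidates, key=automation_index, reverse=True):
    print(r.req_id, round(automation_index(r), 2))
```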
A key technical objective is to demonstrate a significant reduction in test time using Automated Software Testing. Therefore, this phase involves a detailed assessment of the time required to manually execute and validate results. The assessment will include measuring the actual test time required for manually executing and validating the tests. Important in this assessment is not only the time required to execute the tests but also the time required to validate the results. Depending on the nature of the application and tests, validation of results can often take significantly longer than the execution of the tests.
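As a rough illustration of the ROI comparison described above (the detailed treatment is in Chapter 3 of the book), a sketch might compare the cumulative manual effort against the automation investment plus automated run time. All figures below are hypothetical.

```python
def automation_roi(manual_hours_per_cycle: float,
                   automated_hours_per_cycle: float,
                   automation_dev_hours: float,
                   test_cycles: int) -> float:
    """Return ROI as (savings - investment) / investment over the given number of cycles.

    Manual hours include both execution and results validation, since validation
    often dominates the manual effort.
    """
    savings = (manual_hours_per_cycle - automated_hours_per_cycle) * test_cycles
    return (savings - automation_dev_hours) / automation_dev_hours

# Hypothetical numbers: 40 h manual per cycle, 4 h automated, 120 h to build, 10 cycles.
print(f"ROI: {automation_roi(40, 4, 120, 10):.0%}")  # -> ROI: 200%
```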
Based on this analysis, the automator would then develop a recommendation for testing tools and products most compatible with the AUT. This is an important step that is often overlooked. When it is overlooked and tools are simply bought up front without consideration for the application, the result is suboptimal in the best case; in the worst case, the tools simply cannot be used.
At this time, the automator would also identify and develop additional software as
required to support automating the testing. This software would provide interfaces and
other utilities as required to support any unique requirements while maximizing the use
of COTS testing tools and products. A brief description is provided below:

Assess existing automation framework for component reusability
GUI record/playback utilities compatible with AUT display/GUI applications (as applicable, in rare cases)
Library of test scripts able to interface to AUT standard/certified simulation/stimulation equipment and scenarios used for scenario simulation
Library of tools to support test data generation and retrieval
Data repository for expected results
Library of performance testing tools able to support/measure real-time and non-real-time AUTs
Test scheduler able to support distributed testing across multiple computers and test precedence

The final step for this phase will be to complete the Automated Software Testing
configuration for the application(s) including the procurement and installation of the
recommended testing tools and products along with the additional software utilities
developed.
The products of this Automated Software Testing Phase 1 will typically be:
i. Report on Test Improvement Opportunities, as applicable
ii. Automation Index
iii. Automated Software Testing test requirements walkthrough with stakeholders, resulting in agreement
iv. Presentation Report on Recommendations for Tests to Automate, i.e., Test Requirements to be automated
v. Initial summary of high-level test automation approach
vi. Presentation Report on Test Tool or in-house development needs and associated Recommendations
vii. Automated Software Testing Software Utilities
viii. Automated Software Testing Configuration for Application
ix. Summary of Test Environment
x. Timelines
xi. Summary of current manual testing level of effort (LOE) to be used as a baseline for automated testing ROI measurements
Once the list of test requirements to be automated has been agreed to by the program, the requirements can be entered into the Requirements Management tool and/or Test Management tool for documentation and tracking purposes.

Automated Software Testing Phase 2: Manual Test Case Development and Review
Armed with the test requirements to be automated, defined during Phase 1, manual test cases can now be developed. Keep in mind that if test cases already exist, they can simply be analyzed, mapped to the automated test requirements as applicable, reused, and marked as automated test cases. It is important to note that for a test to be automated, the manual test cases need to be adapted for automation, as computer inputs and expectations differ from human inputs. Generally, as a best practice, before any test can be automated it needs to be documented and vetted with the customer to verify its accuracy and to confirm that the automator's understanding of the test cases is correct. This can be accomplished via a test case walkthrough.
Deriving effective test cases is important for Automated Software Testing success, i.e.
automating inefficient test cases will result in poor test program performance.
In addition to the test procedures, other documentation, such as the interface
specifications for the software are also needed to develop the test scripts. As required,
the automator will develop any missing test procedures and will inspect the software, if
available, to determine the interfaces if specifications are not available.
Phase 2 additionally includes a collection and entry of the requirements and test cases
into the test manager and/or requirements management tool (RM tool) as applicable.
The end result is a populated requirements traceability matrix inside the test manager and
RM tool which links requirements to test cases. This central repository provides a
mechanism to organize test results by test cases and requirements.
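As an illustration of the traceability this central repository provides, the sketch below models a minimal RTM in code. In practice this lives inside the test manager and RM tool; the requirement and test case identifiers are invented for the example.

```python
from collections import defaultdict

# Minimal in-memory Requirements Traceability Matrix: requirement -> test cases.
rtm = defaultdict(list)
rtm["REQ-101"] += ["TC-001", "TC-002"]   # hypothetical requirement/test case IDs
rtm["REQ-205"] += ["TC-010"]

# Pass/fail outcomes recorded during test execution.
results = {"TC-001": "PASS", "TC-002": "FAIL", "TC-010": "PASS"}

# Roll test case outcomes up to requirement level, as the RTM in the test manager would.
for req, cases in rtm.items():
    status = "PASS" if all(results.get(tc) == "PASS" for tc in cases) else "FAIL"
    print(req, cases, status)
```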
The test case, related test procedure, test data input and expected results from each test
case are also collected, documented, organized and verified at this time. The expected
results provide the baseline which Automated Software Testing will use to determine pass
or fail status of each test. Verification of the expected results will include manual
execution of the test cases and validation that the expected results were produced. In the
case where exceptions are noted, the automator will work with the Customer to resolve
the discrepancies and as needed update the expected results. The verification step for the
expected results ensures Automated Software Testing will be using the correct baseline of
expected results for the software baseline under test.

Also during the manual test assessment, pass / fail status as determined through manual
execution will be documented. Software Trouble Reports will be documented
accordingly.
The products of Automated Software Testing Phase 2 will typically be:
i. Documented manual test cases to be automated (or modified existing test cases marked as to-be-automated)
ii. Test Case Walkthrough and priority agreement
iii. Test case implementation by phase/priority and timeline
iv. Populated Requirements Traceability Matrix
v. Any Software Trouble Reports associated with manual test execution
vi. First draft of the Automated Software Testing Project Strategy and Charter (as described in the Project Management portion of this document)

Automated Software Testing Phase 3: Automated Framework and Test Script Development
This phase will allow for analysis and evaluation of existing frameworks and Automated
Software Testing artifacts. It is expected that for each subsequent Automated Software
Testing implementation, there will be software utilities and test scripts we will be able to
re-use from previous tasks. During this phase we will determine which scripts can be reused.
As needed, the automation framework will be modified, and test scripts to execute each of the test cases are developed next. Scripts will be developed for each test case based on its test procedure.
The recommended process for developing an automated test framework or test scripts is essentially the same as would be used for developing a software application. Key to the technical approach to developing test scripts is that Automated Software Testing implementations are based on generally accepted development standards; no proprietary implementations should be allowed.
This task includes not only developing each test script but also verifying that each test script works as expected.
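To make the point about generally accepted development standards concrete, the sketch below uses Python's standard unittest framework as one example of a non-proprietary scripting approach; the application interface (login) and the test case IDs referenced in the comments are invented for illustration.

```python
import unittest

def login(user: str, password: str) -> bool:
    """Stand-in for an AUT interface; a real script would drive the application here."""
    return user == "admin" and password == "secret"

class LoginTests(unittest.TestCase):
    """One automated test script per documented manual test case."""

    def test_valid_credentials_tc_001(self):
        # Derived from manual test case TC-001: valid login succeeds.
        self.assertTrue(login("admin", "secret"))

    def test_invalid_credentials_tc_002(self):
        # Derived from manual test case TC-002: invalid login is rejected.
        self.assertFalse(login("admin", "wrong"))

if __name__ == "__main__":
    unittest.main()
```

Because the scripts follow a standard framework rather than a proprietary one, they can be reviewed, versioned, and verified like any other software deliverable.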
The products of Automated Software Testing Phase 3 will typically be:
i. Modified automated test framework; reused test scripts (as applicable)
ii. Test case automation: newly developed test scripts
iii. High-level walkthrough of automated test cases with the internal or external customer
iv. Updated Requirements Traceability Matrix

Automated Software Testing Phase 4: Automated Test Execution and Results Reporting
Next, the tests will be executed using Automated Software Testing and the framework and related test scripts developed. Pass/fail status will be captured and recorded in the test manager. An analysis and comparison of manual and automated test times and results found (pass/fail) will be conducted and then summarized in a test presentation report.
Depending on the nature of the application and tests, an analysis that characterizes the range of performance for the application will also be completed.
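A simple results summary of the kind described here might be generated as in the sketch below; the pass/fail data and timing figures are hypothetical, not program data.

```python
# Hypothetical execution results collected by the framework (times in hours).
results = [
    {"test_case": "TC-001", "status": "PASS", "manual_h": 2.0, "automated_h": 0.2},
    {"test_case": "TC-002", "status": "FAIL", "manual_h": 3.5, "automated_h": 0.3},
    {"test_case": "TC-010", "status": "PASS", "manual_h": 1.5, "automated_h": 0.1},
]

passed = sum(r["status"] == "PASS" for r in results)
manual_total = sum(r["manual_h"] for r in results)
automated_total = sum(r["automated_h"] for r in results)

print(f"Pass rate: {passed}/{len(results)}")
print(f"Manual execution: {manual_total:.1f} h, automated: {automated_total:.1f} h")
print(f"Time reduction: {1 - automated_total / manual_total:.0%}")
```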
The products of Automated Software Testing Phase 4 will typically be:
i. Test Report including Pass/Fail Status by Test Case and Requirement (including updated RTM)
ii. Test Execution Times (Manual and Automated) and initial ROI reports
iii. Test Summary Presentation
iv. Automated Software Testing training, as required

Automated Software Testing Phase 5: Program Review and Assessment
The goal of Automated Software Testing implementations is to allow for continued improvement. During this phase we will review the performance of the Automated Software Testing test program in order to determine where improvements can be implemented.
Throughout the Automated Software Testing efforts we will collect various test metrics,
many during the test execution phase. It is not beneficial to wait until the end of the
Automated Software Testing efforts to document insights gained into how to improve
specific procedures. When needed, we will alter detailed procedures during the test
program, when it becomes apparent that such changes are necessary to improve the
efficiency of an ongoing activity.

Another focus of the test program review includes an assessment of whether Automated
Software Testing efforts satisfy completion criteria and the AUT automation effort has
been completed. The review could also include an evaluation of progress measurements
and other metrics collected, as required by the program.
The evaluation of the test metrics should examine how well original test program
time/sizing measurements compared with the actual number of hours expended and test
procedures developed to accomplish the Automated Software Testing effort. The review
of test metrics should conclude with improvement recommendations, as needed.
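For example, the planned-versus-actual comparison can be as simple as a per-activity variance calculation; the activity names and hours below are illustrative, not actual program data.

```python
# Hypothetical planned vs. actual effort (hours) per Automated Software Testing activity.
metrics = {
    "script development": (200, 240),
    "framework updates":  (80, 70),
    "test execution":     (40, 55),
}

for activity, (planned, actual) in metrics.items():
    variance = (actual - planned) / planned
    print(f"{activity}: planned {planned} h, actual {actual} h, variance {variance:+.0%}")
```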
Just as important, we will document the activities that Automated Software Testing
efforts performed well and were done correctly in order to be able to repeat these
successful processes.
Corrective actions proposed once the project is complete will surely benefit the next project, but corrective actions applied during the test program can be significant enough to improve the final results of that test program.
Automated Software Testing efforts will adopt, as part of its culture, an ongoing iterative
process of lessons learned activities. This approach will encourage Automated Software
Testing implementers to take the responsibility to raise corrective action proposals
immediately, when such actions potentially have significant impact on Automated
Software Testing test program performance. This promotes leadership behavior from
each test engineer.
The products of Phase 5 will typically be:
i. Final Report

Quality Gates
Internal controls and quality assurance processes verify each phase has been completed
successfully, while keeping the customer involved. Controls include Quality Gates for
each Automated Software Testing phase, such as Technical Interchanges and
Walkthroughs that include the customer, use of Standards, and Process Measurement.
Successful completion of the activities prescribed by the process should be the only
approved gateway to the next phase. Those approval activities or quality gates include
technical interchanges, walkthroughs, internal inspections, examination of constraints and
associated risks, configuration management; tracked and monitored schedules and cost;
corrective actions; and more, as this section describes. Figure 2 below reflects typical Quality Gates, which apply to the Automated Software Testing milestones.

Figure 2: Automated Software Testing Phases, Milestones and Quality Gates (ATRT = Automated Test and Re-test)

Our process controls verify that the output of one stage represented in Figure 2 is fit to be used as the input to the next stage. Verifying that the output is satisfactory may be an iterative process; verification is accomplished through customer review meetings, internal meetings, and comparison of the output against defined standards and other project-specific criteria, as applicable. Additional quality gate activities will take place as applicable, for example:

Technical Interchanges and Walkthroughs


Technical interchanges and walkthroughs, conducted together with the customer and the automation team, represent an evaluation technique that will take place during, and as a final step of, each Automated Software Testing phase. These evaluation techniques can be applied to all Automated Software Testing deliverables, i.e., test requirements, test cases, Automated Software Testing design and code, and other software work products, such as test procedures and automated test scripts. They consist of a detailed examination by a person or a group other than the author. These interchanges and walkthroughs are intended to detect defects, non-adherence to Automated Software Testing standards, test procedure issues, and other problems.
Examples of technical interchange meetings include an overview of test requirement
documentation. When Automated Software Testing test requirements are defined in
terms that are testable and correct, then errors are prevented from entering the Automated
Software Testing development pipeline, which would eventually be reflected as possible
defects in the deliverable. Automated Software Testing design component walkthroughs
can be performed to ensure that the design is consistent with defined requirements,
conforms to standards and applicable design methodology and errors are minimized.
Technical reviews and inspections have proven to be the most effective form of
preventing miscommunication, allowing for defect detection and removal.

Internal Inspections
In addition to customer technical interchanges and walkthroughs, internal automator inspections of deliverable work products will take place to support the detection and removal of defects early in the Automated Software Testing development and test cycle; prevent the migration of defects to later phases; improve quality and productivity; and reduce cost, cycle time, and maintenance efforts.

Examination of Constraints and associated Risks


A careful examination of goals, constraints, and associated risks will take place. This examination will lead to a systematic Automated Software Testing strategy, produce a predictable, higher-quality outcome, and enable a high degree of success. Combining a careful examination of constraints, as a defect prevention technology, with defect detection technologies will yield the best results.
Any constraint and associated risk will be communicated to the customer, and risk mitigation strategies will be developed as necessary.

Risk Mitigation Strategies


Defined QA processes allow for constant risk assessment and review. If a risk is identified, appropriate mitigation strategies will be deployed. We require ongoing review of cost, schedules, processes, and implementation so that potential problems do not go unnoticed until it is too late; instead, our process assures that problems are addressed and corrected immediately.

Safeguard Integrity of the Automated Software Testing Process and Environments
Experience shows that it is important to keep the integrity of the Automated Software Testing processes and environment in mind. One means of safeguarding the integrity of the Automated Software Testing process is to test any new technology in an isolated environment, validating, for example, that the tool performs up to product specifications and marketing claims before it is used on any Application-Under-Test or customer test environment. The automator will also verify that any upgrade to a technology still runs in the current environment. The previous version of the tool may have performed correctly, and the new upgrade may perform fine in other environments, but the upgrade might adversely affect the team's particular environment. Additionally, using a configuration management tool to baseline the test repository will help safeguard the integrity of the automated testing process.

Configuration management
The automator incorporates the use of configuration management tools, which allow us to control the integrity of the Automated Software Testing artifacts. For example, we will place all Automated Software Testing automation framework components, script files, test case and test procedure documentation, schedules, cost tracking, and more under configuration management. Using a configuration management tool assures that accurate version control and records of the latest Automated Software Testing artifacts and products are maintained. IDT is currently utilizing the Subversion software configuration management tool to maintain Automated Software Testing product integrity, and will continue to evaluate the best products available to allow for the most efficient controls.
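As one illustration of baselining the test repository, a small helper could tag the current Automated Software Testing artifacts in Subversion. The repository URL, layout, and tag naming below are assumptions made for the sketch, not IDT's actual setup.

```python
import subprocess

REPO = "https://svn.example.com/ast"  # hypothetical repository URL

def baseline(tag: str, message: str) -> None:
    """Create an immutable baseline of the test artifacts by copying trunk to a tag."""
    subprocess.run(
        ["svn", "copy", f"{REPO}/trunk", f"{REPO}/tags/{tag}", "-m", message],
        check=True,
    )

# Example: baseline the artifacts delivered at the end of Phase 3.
baseline("phase3-baseline-1.0", "Baseline framework, scripts and test docs for Phase 3")
```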

Schedules are defined, tracked and communicated


Project schedules are defined, tracked, and communicated. Schedule task durations are determined based on past historical performance and associated best estimates. Additionally, any schedule dependencies and critical-path elements will be considered up front and incorporated into the schedule.
In order to meet any schedule, for example if the program is under a tight deadline, only the Automated Software Testing tasks that can be successfully delivered in time will be included in the schedule. During Automated Software Testing Phase 1, test requirements are prioritized. This prioritization allows the most critical tasks to be included and completed up front, while the less critical, lower-priority tasks can be moved to later in the schedule accordingly.
After Automated Software Testing phase 1 an initial schedule is presented to the customer
for approval. During the Technical Interchanges and Walkthroughs, schedules are
presented on an ongoing basis to allow for continuous schedule communication and
monitoring. Potential schedule risks will be communicated well in advance and risk
mitigation strategies will be explored and implemented, as needed, i.e. any potential
schedule slip will be communicated to the customer immediately and any necessary
adjustment will be made accordingly.
Tracking schedules on an ongoing basis also contributes to tracking and controlling costs.

Costs are tracked and controlled


By closely tracking schedules and other required Automated Software Testing resources, the automator assures that a cost tracking and control process is followed. Inspections, walkthroughs, and other status reporting allow for closely monitored cost control and tracking.

Tracking Project Performance


Following the processes outlined previously ensures performance requirements are met during all phases of Automated Software Testing, including new projects, production support, upgrades, and technology refreshes. Performance is continuously tracked with the necessary visibility into project performance and the related schedule and cost. The automation manager maintains a record of planned vs. actual delivery dates, continuously evaluating the project schedule, which is maintained in conjunction with all project tracking activities, presented at weekly status meetings, and submitted with the Monthly Status Report.

Corrective Actions/Adjustments
QA processes will allow for continuous evaluation of Automated Software Testing task
implementation. QA processes are in place to assure successful implementation of
Automated Software Testing efforts. If a process is too rigid, however, its implementation can be set up for failure. Even with the best-laid plans and implementations of them, corrective actions need to be taken and adjustments need to be made.

Our QA processes allow for and support the implementation of necessary corrective actions. They allow for strategic course correction, schedule adjustments, and deviation from the Automated Software Testing phases to adjust to specific project needs as needed, which enables continuous process improvement and an ultimately successful delivery.
Corrective actions and adjustments will only be made to improve the Automated Software Testing implementation. No adjustment will be made without first discussing the change with the customer to communicate why the adjustment is recommended and the impact of not making it, and to get buy-in.

Action, Issue and Defect Tracking


A detailed procedure has been defined for tracking action items to completion. Additionally, a procedure exists for tracking issues, system trouble reports (STRs), or defects to closure. Templates that describe all elements to be filled out for these types of reports will be used.
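A template of the kind mentioned here would enumerate the fields each report must contain. The sketch below lists one plausible set of fields as an assumption; it is not the actual template referenced in the text.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SystemTroubleReport:
    """Illustrative STR template; required fields must be filled out before submission."""
    str_id: str
    title: str
    found_in_build: str
    severity: str                 # e.g., critical / major / minor
    steps_to_reproduce: str
    expected_result: str
    actual_result: str
    status: str = "open"          # open -> assigned -> fixed -> verified -> closed
    opened_on: date = field(default_factory=date.today)
    assigned_to: str = ""

# Example report derived from a failed automated test case (hypothetical data).
str_1 = SystemTroubleReport(
    str_id="STR-042", title="Login rejects valid credentials",
    found_in_build="1.3.7", severity="major",
    steps_to_reproduce="Run TC-001 against build 1.3.7",
    expected_result="Login succeeds", actual_result="Login fails with error 500",
)
print(str_1.str_id, str_1.status)
```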

[i] This process is based on the Automated Testing Lifecycle Methodology (ATLM) described in the book Automated Software Testing. A diagram that shows the relationship of the Automated Software Testing technical process to the Software Development Lifecycle will be provided here.
[ii] As used at IDT.
[iii] Implementing Automated Software Testing, Addison Wesley, Feb 2009; Chapter 3 discusses ROI in detail.
