Our proposed Automated Software Testing technical process needs to be flexible enough
to allow for ongoing iterative and incremental improvement feedback loops, including
adjustments to specific project needs. For example, if test requirements and test cases
already exist for a project, Automated Software Testing will evaluate the existing test
artifacts for reuse, modify them as required, and mark them as to-be-automated, instead
of re-creating the test requirement and test case documentation from scratch. The
goal is to reuse existing components and artifacts, using and modifying them as appropriate,
whenever possible.
The Automated Software Testing phases and selected best practices need to be adapted to
each task at hand and need to be revisited and reviewed for effectiveness on an ongoing
basis. An approach for this is described here.
The very best standards and processes are not useful if stakeholders don't know about
them or don't adhere to them. Therefore, Automated Software Testing processes and
procedures are documented, communicated, enforced, and tracked. Automated Software
Testing process training will need to take place.
Our process best practices span all phases of the Automated Software Testing life cycle. For example, in the Requirements Phase an initial schedule is developed and is
then maintained throughout each phase of the Automated Software Testing
implementation (e.g., updating percentage complete to allow for program status
tracking). See the section on Quality Gates activities related to schedules.
Weekly status updates are also an important ingredient for successful system program
management, which spans all phases of the development life-cycle. See our section on
Quality Gates, related to status reporting.
Post-mortems, or lessons learned, play an essential part in these efforts; they are
conducted to help avoid repeating past mistakes in ongoing or new development
efforts. See our section on Quality Gates, related to inspections and reviews.
By implementing quality gates and related checks and balances throughout Automated
Software Testing, the team is not only responsible for the final test automation efforts,
but also for helping to enforce that quality is built into the entire Automated Software
Testing life cycle. The Automated Software Testing team is held responsible for
defining, implementing, and verifying quality.
It is the goal of this section to provide program management and the technical lead with a solid
set of Automated Software Testing technical process best practices and recommendations,
which will ultimately improve the quality of the Automated Software Testing program,
increase productivity with respect to schedule and work performed, and aid successful
Automated Software Testing efforts while avoiding failures.
Our proposed overall project approach to accomplish automated testing for a specific
effort is listed in the project milestones below.
Automated Software Testing Phase 1: Requirements Gathering - Analyze Automated Testing Needs
Phase 1 will generally begin with a kick-off meeting. The purpose of the kick-off
meeting is to become familiar with the AUT's background, related testing processes,
automated testing needs, and schedules. Any additional information regarding the AUT
will also be collected for further analysis. This phase serves as the baseline for an
effective Automated Software Testing program; i.e., the test requirements will serve as a
blueprint for the entire Automated Software Testing effort.
The following information should ideally be available for each application:
Requirements
Test Cases
Test Procedures
Expected Results
Interface Specifications
In the event needed information is not available, the automator will work with the
customer to derive and/or develop it as needed.
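The artifact inventory above can be sketched as a simple record with a gap check; all names and fields here are illustrative, not part of the documented process:

```python
from dataclasses import dataclass, field

# Hypothetical record of the per-application information gathered in Phase 1.
@dataclass
class AutArtifacts:
    application: str
    requirements: list = field(default_factory=list)
    test_cases: list = field(default_factory=list)
    test_procedures: list = field(default_factory=list)
    expected_results: dict = field(default_factory=dict)
    interface_specs: list = field(default_factory=list)

    def missing(self):
        """Artifact categories that still need to be derived with the customer."""
        gaps = []
        if not self.requirements: gaps.append("Requirements")
        if not self.test_cases: gaps.append("Test Cases")
        if not self.test_procedures: gaps.append("Test Procedures")
        if not self.expected_results: gaps.append("Expected Results")
        if not self.interface_specs: gaps.append("Interface Specifications")
        return gaps

aut = AutArtifacts("Billing", requirements=["REQ-1"], test_cases=["TC-1"])
print(aut.missing())  # categories to derive/develop with the customer
```

A gap report like this makes the "derive and/or develop as needed" step concrete: anything returned by `missing()` becomes an agenda item for the kick-off follow-up.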
Additionally, during this phase Automated Software Testing efforts will generally follow
this process: estimating the time required not only to execute the
tests but also to validate the results. Depending on the nature of the application and
tests, validation of results can often take significantly longer than the time to execute
the tests.
Based on this analysis, the automator would then develop a recommendation for the testing
tools and products most compatible with the AUT. This is an important step that is often
overlooked. When it is skipped and tools are simply bought up front without
consideration for the application, the result is suboptimal at best; at worst, the tools
cannot be used at all.
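One hedged way to make the tool recommendation systematic is a weighted compatibility score against AUT criteria; the criteria, weights, and ratings below are entirely illustrative:

```python
# Illustrative weighted-score comparison of candidate test tools against
# AUT compatibility criteria. Criteria and weights are invented for the sketch.
CRITERIA_WEIGHTS = {"gui_technology_support": 3, "protocol_support": 2,
                    "scripting_language_fit": 2, "cost": 1}

def tool_score(ratings):
    """Ratings are 0-5 per criterion; returns the weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

candidates = {
    "Tool A": {"gui_technology_support": 5, "protocol_support": 2,
               "scripting_language_fit": 4, "cost": 3},
    "Tool B": {"gui_technology_support": 1, "protocol_support": 5,
               "scripting_language_fit": 3, "cost": 5},
}
best = max(candidates, key=lambda t: tool_score(candidates[t]))
print(best, tool_score(candidates[best]))  # Tool A 30
```

Weighting the criteria forces the AUT's actual technology constraints, rather than tool marketing, to drive the recommendation.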
At this time, the automator would also identify and develop additional software as
required to support automating the testing. This software would provide interfaces and
other utilities as required to support any unique requirements while maximizing the use
of COTS testing tools and products.
The final step for this phase will be to complete the Automated Software Testing
configuration for the application(s) including the procurement and installation of the
recommended testing tools and products along with the additional software utilities
developed.
The products of this Automated Software Testing Phase 1 will typically be:
i. Report on Test Improvement Opportunities, as applicable
ii. Automation Index
iii. Automated Software Testing test requirements walkthrough
with stakeholders, resulting in agreement
iv. Presentation Report on Recommendations for Tests to
Automate, i.e. Test Requirements to be automated
v. Initial summary of high level test automation approach
vi. Presentation Report on Test Tool or in-house development
needs and associated Recommendations
vii. Automated Software Testing Software Utilities
viii. Automated Software Testing Configuration for Application
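The Automation Index listed above is, in this sketch, interpreted as the share of reviewed test requirements recommended for automation; both the interpretation and the numbers are assumptions for illustration:

```python
# Minimal sketch of an "automation index": the fraction of reviewed test
# requirements judged worth automating. Figures are illustrative only.
def automation_index(to_automate, total):
    if total == 0:
        raise ValueError("no test requirements reviewed")
    return to_automate / total

idx = automation_index(to_automate=120, total=200)
print(f"{idx:.0%}")  # 60%
```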
Also during the manual test assessment, pass/fail status as determined through manual
execution will be documented, and Software Trouble Reports will be filed
accordingly.
The products of Automated Software Testing Phase 2 will typically be:
i. Documented manual test cases to be automated (or
modified existing test cases marked as to-be-automated)
ii. Test Case Walkthrough and priority agreement
iii. Test Case implementation by phase/priority and timeline
iv. Populated Requirements Traceability Matrix
v. Any Software Trouble Reports associated with manual test
execution
vi. First draft of Automated Software Testing Project Strategy
and Charter (as described in the Project Management
portion of this document)
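The Requirements Traceability Matrix and its link to manual pass/fail results and Software Trouble Reports can be sketched minimally as follows (identifiers and structure are illustrative):

```python
# Sketch of a requirements traceability matrix: each requirement maps to the
# test cases covering it, plus the manual-execution result per test case.
rtm = {
    "REQ-001": {"tests": ["TC-01", "TC-02"],
                "status": {"TC-01": "pass", "TC-02": "fail"}},
    "REQ-002": {"tests": [], "status": {}},
}

def uncovered(matrix):
    """Requirements with no covering test case: a traceability gap."""
    return [req for req, row in matrix.items() if not row["tests"]]

def trouble_reports(matrix):
    """Failed manual runs: candidates for Software Trouble Reports."""
    return [(req, tc) for req, row in matrix.items()
            for tc, result in row["status"].items() if result == "fail"]

print(uncovered(rtm))        # coverage gaps to close before automation
print(trouble_reports(rtm))  # failures to document as STRs
```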
Another focus of the test program review is an assessment of whether Automated
Software Testing efforts satisfy the completion criteria and whether the AUT automation
effort has been completed. The review could also include an evaluation of progress
measurements and other metrics collected, as required by the program.
The evaluation of the test metrics should examine how well the original test program
time/sizing estimates compare with the actual number of hours expended and test
procedures developed to accomplish the Automated Software Testing effort. The review
of test metrics should conclude with improvement recommendations, as needed.
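The estimate-versus-actual comparison described above might be computed as a simple percentage variance; the tolerance threshold and figures are illustrative assumptions:

```python
# Illustrative estimate-vs-actual comparison for the test-metrics review.
def variance_pct(estimated, actual):
    """Positive = overrun against the estimate, negative = under it."""
    return (actual - estimated) / estimated * 100

hours = variance_pct(estimated=400, actual=460)  # hours expended
procs = variance_pct(estimated=50, actual=45)    # test procedures developed
for name, v in [("hours", hours), ("test procedures", procs)]:
    # A 10% band is an assumed tolerance, not one prescribed by the process.
    flag = "investigate" if abs(v) > 10 else "within tolerance"
    print(f"{name}: {v:+.1f}% ({flag})")
```

Any metric flagged for investigation would feed directly into the improvement recommendations that close out the review.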
Just as important, we will document the activities that Automated Software Testing
efforts performed well and correctly, so that these successful processes can be
repeated.
Once the project is complete, proposed corrective actions will certainly benefit the
next project; but corrective actions applied during the test program itself can be significant
enough to improve its final results.
Automated Software Testing efforts will adopt, as part of their culture, an ongoing iterative
process of lessons learned activities. This approach will encourage Automated Software
Testing implementers to take the responsibility to raise corrective action proposals
immediately, when such actions potentially have significant impact on Automated
Software Testing test program performance. This promotes leadership behavior from
each test engineer.
The products of Phase 5 will typically be:
i. Final Report
Quality Gates
Internal controls and quality assurance processes verify that each phase has been completed
successfully, while keeping the customer involved. Controls include Quality Gates for
each Automated Software Testing phase, such as Technical Interchanges and
Walkthroughs that include the customer, use of standards, and process measurement.
Successful completion of the activities prescribed by the process should be the only
approved gateway to the next phase. Those approval activities or quality gates include
technical interchanges, walkthroughs, internal inspections, examination of constraints and
associated risks, configuration management; tracked and monitored schedules and cost;
corrective actions; and more, as this section describes. Figure 2 below reflects typical
Quality Gates, which apply to the Automated Software Testing milestones.
Figure 2 Automated Software Testing Phases, Milestones and Quality Gates (ATRT = Automated Test
and Re-test)
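As a sketch of how a quality gate might be enforced in practice, the following checks that every prescribed gate activity is complete before a phase's output passes to the next stage (the activity list is illustrative, not the full set this section names):

```python
# Sketch of a quality gate check: a phase's outputs pass to the next phase
# only when every prescribed gate activity is recorded as complete.
GATE_ACTIVITIES = ["technical interchange", "walkthrough",
                   "internal inspection", "schedule review"]

def gate_passed(completed):
    """Return (passed, missing activities) for one phase."""
    missing = [a for a in GATE_ACTIVITIES if a not in completed]
    return (len(missing) == 0, missing)

ok, missing = gate_passed({"technical interchange", "walkthrough"})
print(ok, missing)  # outstanding activities block the gate
```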
Our process controls verify that the output of one stage represented in Figure 2 is fit to be
used as the input to the next stage. Verifying that output is satisfactory may be an
iterative process; verification is accomplished through customer review meetings, internal
meetings, and comparison of the output against defined standards and other project-specific
criteria, as applicable. Additional quality gate activities will take place as applicable, for
example:
Technical Interchanges and Walkthroughs
Technical interchanges and walkthroughs cover all Automated Software Testing deliverables, i.e., test requirements, test cases, Automated
Software Testing design and code, and other software work products such as test
procedures and automated test scripts. They consist of a detailed examination by a person
or a group other than the author. These interchanges and walkthroughs are intended to
detect defects, non-adherence to Automated Software Testing standards, test procedure
issues, and other problems.
Examples of technical interchange meetings include an overview of test requirement
documentation. When Automated Software Testing test requirements are defined in
terms that are testable and correct, then errors are prevented from entering the Automated
Software Testing development pipeline, which would eventually be reflected as possible
defects in the deliverable. Automated Software Testing design component walkthroughs
can be performed to ensure that the design is consistent with defined requirements,
conforms to standards and applicable design methodology and errors are minimized.
Technical reviews and inspections have proven to be the most effective form of
preventing miscommunication, allowing for defect detection and removal.
Internal Inspections
In addition to customer technical interchanges and walkthroughs, internal inspections
of deliverable work products by the automator will take place to support the detection and
removal of defects early in the Automated Software Testing development and test cycle;
prevent the migration of defects to later phases; improve quality and productivity; and
reduce cost, cycle time, and maintenance efforts.
Rather than letting problems go unnoticed until it is too late, our process assures that
problems are addressed and corrected immediately.
Configuration Management
The automator incorporates the use of configuration management tools, which allow
us to control the integrity of the Automated Software Testing artifacts. For example, we
will place all Automated Software Testing automation framework components, script
files, test case and test procedure documentation, schedules, cost tracking, and more under
configuration management. Using a configuration management tool ensures that
accurate version control and records of the latest Automated Software Testing artifacts
and products are maintained. IDT currently uses the Subversion software
configuration management tool to maintain Automated Software Testing product
integrity, and will continue to evaluate the best available products to allow for the most
efficient controls.
Test requirements are prioritized. This prioritization allows the most critical tasks to be
included and completed up front, while less critical and lower-priority
tasks can be moved to later in the schedule accordingly.
After Automated Software Testing Phase 1, an initial schedule is presented to the customer
for approval. During the Technical Interchanges and Walkthroughs, schedules are
presented on an ongoing basis to allow for continuous schedule communication and
monitoring. Potential schedule risks will be communicated well in advance and risk
mitigation strategies will be explored and implemented, as needed, i.e. any potential
schedule slip will be communicated to the customer immediately and any necessary
adjustment will be made accordingly.
Tracking schedules on an ongoing basis also contributes to tracking and controlling costs.
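The priority-driven scheduling described above can be sketched as a simple stable sort of requirements by criticality (identifiers and priority values are invented):

```python
# Sketch of priority-driven scheduling: order test requirements so the most
# critical land earliest in the schedule (priority 1 = most critical).
reqs = [("REQ-007", 3), ("REQ-001", 1), ("REQ-004", 2), ("REQ-002", 1)]

schedule = sorted(reqs, key=lambda r: r[1])  # stable: ties keep input order
print([req for req, _ in schedule])  # ['REQ-001', 'REQ-002', 'REQ-004', 'REQ-007']
```

If the schedule slips, the lowest-priority tail of this ordering is what moves later, which keeps any necessary adjustment aligned with the prioritization the customer already approved.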
Corrective Actions/Adjustments
QA processes will allow for continuous evaluation of Automated Software Testing task
implementation. QA processes are in place to assure successful implementation of
Automated Software Testing efforts. If a process is too rigid, however, its implementation
can be set up for failure. Even with the best-laid plans and implementations of them,
corrective actions need to be taken and adjustments made.
Our QA processes allow for and support the implementation of necessary corrective
actions. They allow for strategic course correction, schedule adjustments, and deviation from
the Automated Software Testing phases to adjust to specific project needs, which
enables continuous process improvement and ultimately a successful delivery.
Corrective actions and adjustments will only be made to improve the Automated Software
Testing implementation. No adjustment will be made without first discussing
the change with the customer, i.e., communicating why the adjustment is
recommended and the impact of not making it, and getting buy-in.
i. This process is based on the Automated Testing Lifecycle Methodology (ATLM) described in the book Automated
Software Testing. A diagram that shows the relationship of the Automated Software Testing technical process to the
Software Development Lifecycle will be provided here.
ii. As used at IDT.
iii. Implementing Automated Software Testing, Addison-Wesley, February 2009; Chapter 3 discusses ROI in detail.