These slides are distributed under the Creative Commons License. In brief summary, you may make and distribute copies of these slides so long as you give the original author credit and, if you alter, transform or build upon this work, you distribute the resulting work only under a license identical to this one. For the rest of the details of the license, see http://creativecommons.org/licenses/by-sa/2.0/legalcode.
Bret Pettichord
bret@pettichord.com www.pettichord.com
Welcome
Getting the most out of this seminar: let us know of special interests, and ask questions.
Seminar Objectives
Understand the different options for test automation that are available to you. Learn test automation concepts. Identify important requirements for success. Contrast the benefits of GUI, API and other approaches. Learn which contexts are most suitable for the different approaches. Select a test automation architecture suitable for your project.
(c) 2002 Bret Pettichord
Agenda
Introduction
Test Automation Patterns: Context, Architecture, Mission; Maintainability, Reviewability, Dependability, Reusability; Scripting Frameworks, Data-Driven Scripts
Quality Attributes
Screen-based Tables, Action Keywords, Test-First Programming, API Tests, Thin GUI, Consult an Oracle, Automated Monkey, Assertions and Diagnostics, Quick and Dirty
Architectural Patterns
Background
Domains
Tools
Technical publishing, expense reporting, sales tracking, database management, systems management, application management, Internet access, math education, benefits administration
SilkTest (QA Partner), WinRunner, Rational Robot (TeamTest), TestQuest (Btree), WebLoad, QA Run (Compuware); Perl, Expect (Tcl), Java, Korn shell, Lisp, Python
Languages
Acknowledgements
Much of the material in the course results from discussions with colleagues.
Los Altos Workshops on Software Testing (LAWST 1 & 2). Austin Workshops on Test Automation (AWTA 1 & 2). Workshop on Model-Based Testing (Wombat). Other course notes and reviews.
Introduction
Introduction Quality Attributes Architectural Patterns Are You Ready to Automate? Concluding Themes
Insert hooks into the operating system. Replace browser DLLs. Replace operating system DLLs. Supplant shared libraries by changing the load path. Provide their own instrumented window manager. Directly instrument application event loop.
Some subtle system bugs can seriously confuse test tools. Tools require special customization to support custom controls.
Examples:
Grids with embedded drop-down lists, treeviews, Delphi lists, 3270 emulators, PowerBuilder icons
The user interface is defined and frozen early. Programmers use late-market, non-customized interface technologies. There are very few reports of projects actually using Capture Replay successfully.
What is a Pattern?
A pattern is a solution to a problem in context
The forces that the pattern resolves The consequences of its application
Find a way to leverage the skills available for testing. Take advantage of all the interfaces of the software under test. Focus on key features or concerns that can benefit from the added power of automated testing.
Mission
Product Architecture.
Test Mission.
Staff
Product
Understand user perspective and the problem domain Have experience in the targeted user role Anticipate unwritten requirements Understand the technology and the solution domain Have experience with the technology Anticipate technology challenges
A mix of skills improves testing effectiveness. Design your test strategy and automation architecture to allow contributions from all.
Technical Specialists
Automation Specialists
Understand testing technology. Have experience with test tools. Anticipate automation needs.
Staffing Models
User Experts + Tools; User Experts + Automation Experts; Junior Programmers; Tester/Programmers; Test Expert + Warm Bodies; Spare-time Automation; Central Automation Team
Product Architecture
Hardware and software. Multiple machines. Distributed architecture. Networking. Databases. Multiple users, multiple user roles. Both GUI and non-GUI interfaces are available for testing. Event-driven and multi-threaded.
What is Architecture?
Selection of tools, languages and components Decomposition into modules
Standard modules which can be acquired Custom modules which must be built
Test Mission
What is your test mission?
What kind of bugs are you looking for? What concerns are you addressing? Find important bugs fast. Verify key features. Keep up with development. Assess software stability, concurrency, scalability. Provide service.
Possible missions
Make automation serve your mission. Expect your mission to change.
Two Focuses
Efficiency
Service
Reduce testing costs. Reduce time spent in the testing phase. Improve test coverage. Make testers look good. Reduce impact on the bottom line.
Tighten build cycles. Enable refactoring and other risky practices. Prevent destabilization. Make developers look good. Increase management confidence in the product.
Test Strategy
[Diagram: the test cycle Specify Tests → Execute Tests → Verify Results, with numbered automation opportunities: test specification; test execution (external interfaces, internal interfaces); results evaluation (consulting oracles, comparing baselines).]
Test Setup
Software testing usually requires a lot of setup activity in preparation for testing
Installing Product Software Configuring Operating Systems Initializing Databases Loading Test Data
Many of these activities can be automated. System administration tools are often useful and cost-effective.
Quality Attributes
Introduction Quality Attributes Architectural Patterns Are You Ready to Automate? Concluding Themes
Essential Capabilities
"Automation is replacing what works with something that almost works, but is faster and cheaper." (Professor Roger Needham) What trade-offs are we willing to make? Functional requirements for test automation
Maintainability
Will the tests still run after product design changes? Will tests for 1.0 work with 2.0? Can tests be easily updated for 2.0? Will tests fail because of changes to the output format?
Maintainability
[does not print]
Reviewability
Can others review the test scripts and understand what is being covered? Are the test scripts documented? Can we make sure that the automated script matches the original test design? What kind of coverage does the test suite have? How can we know? Is the test testing the right stuff?
Repeatability
Will your test do the exact same thing every time? Is random data embedded in your tests? Do your tests modify objects in a way that prevents them from being re-run?
Integrity
Can your test results be trusted? Do you get lots of false alarms? Are you sure that failed tests always appear in the test results? Is it possible for tests to be inadvertently skipped? A basic principle: automated tests must fail if the product under test is not installed.
Reliability
Will the test suite actually run? Will tests abort? Can you rely on the test suite to actually do some testing when you really need it? Will it run on all the platforms and configurations you need to test?
Dependability
[does not print]
Reusability
To what degree can testing assets be reused to create more, different tests? This goes beyond mere repetition. Can you amass a collection of data, procedures, mappings and models that can be reused in new ways to make more testing happen?
Independence
Can your tests be run individually or only as part of a suite? Can developers use them to reproduce defects? Will your tests run correctly if previous tests fail? Will one failure cause all succeeding tests to fail?
Performance
Rarely is it worth optimizing test automation code for performance. Supporting independence and repeatability can impact performance. But performance improvements can complicate tests, reduce reliability and may even damage integrity.
Performance Example
[does not print]
Simplicity
"Things should be as simple as possible, but no simpler." (Einstein) Complexity is the bugbear of test automation. You will need to test your test automation, but you are likely to have few resources for this. Therefore your architecture must be as simple and perspicuous as possible.
Quality Attributes
Maintainability, Reviewability, Repeatability, Integrity, Reliability, Reusability, Independence, Performance, Simplicity
Architectural Patterns
Frameworks for developing automated tests
Introduction Quality Attributes Architectural Patterns Are You Ready to Automate? Concluding Themes
Our approach:
Context: people, product, mission
Test strategy: test creation, test execution, test evaluation
Quality attributes
Scripting Framework
You want to create a lot of tests without building a lot of custom support code. Therefore, use a generalized scripting framework. Extend it as needed for your project. Most commercial GUI test tools are scripting frameworks with GUI drivers.
Mission
Tester/Programmers. Test Tool Specialists. Can automate GUI testing. Frameworks also support API and unit testing.
Product
Automate tests that will last the life of the product. Automate tests that are defined in advance. Provide maximum flexibility of approach.
Test Harness
Often proprietary. Standard languages are preferred. Written in the scripting language. May contain calls to library functions.
Test Scripts
Executes tests and collects test results. Optional support for preconditions and postconditions (setup and teardown).
Optional Capabilities
Run the test scripts. Collect test verdicts. Report test results.
Check test preconditions (abort or correct if not met). Allow selected subsets of tests to run. Distribute test execution across multiple machines. Distinguish between known failures and new failures. Allow remote execution and monitoring. Use an Error Recovery System (later).
Recorders
Driver functions for Graphical User Interfaces. Identify user interface components using specified qualifiers. Insert events into the input stream aimed at those components. Requires customization to support custom controls.
Action Recorders
w Record user actions as scripts
Object Recorders
w Report control identities by class and properties w Assist hand-coding w Spy
Test Evaluation
Tests are hand-coded, using capture replay when possible. Product must be available before writing tests. The test harness and scripting language provide an execution environment for the tests.
Test Execution
Expected outcomes are hand-coded when tests are created, or the framework supports capturing a baseline for later comparison when the test is first run
Avoid using cut and paste? Repeated code smells bad, so place it in functions instead. But tests by their nature use lots of repetition. Tests are easier to review if they don't include distracting function calls and conditional statements. Tests become less reliable with added complexity.
Therefore, use cut and paste when it makes tests easier to understand and modify.
Pilot projects. It serves as a foundation for more complicated architectures. Minimum complexity that provides reasonable flexibility and robustness. User interface is very stable, or product has a short life-span. Domain specialists can program, or tests are well-specified.
Dependability
Medium. Additional design patterns can be used to insulate tests from interface changes. Testers will require discipline to avoid using unmaintainable constructs. Low. Tests are written in a scripting language that may not be known by many.
Medium. Scripting language provides the freedom to write clever, complex, error-prone code. Medium. Framework can be the foundation of other architectures.
Reusability
Reviewability
Without a recovery system, test suites are prone to cascading failures (domino effect). Close extraneous application windows. Shut down and restart the product. Reboot the hardware. Reset test files. Reinitialize the database. Log the error.
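An error recovery system can be sketched as a wrapper around test execution: when a test fails, recovery actions run before the next test starts, so a cascading failure stops at one test. In this illustrative Python sketch the recovery action is a stub; a real one would close windows, restart the product or reset files as listed above.

```python
# Error-recovery sketch: after each failed test, run recovery
# actions so later tests start from a clean state.

def run_with_recovery(tests, recover):
    verdicts = []
    for test in tests:
        try:
            test()
            verdicts.append("PASS")
        except Exception as err:
            verdicts.append("FAIL")
            recover(err)   # real code: restart product, reset files, log
    return verdicts

recoveries = []

def recover(err):
    recoveries.append(str(err))   # stub: just record what went wrong

def good():
    pass

def bad():
    raise RuntimeError("stale dialog left open")

verdicts = run_with_recovery([bad, good], recover)
```

Here the failing test triggers a recovery, and the following test still runs and passes instead of inheriting the broken state.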
Data-Driven Scripts
Good testing practice encourages placing test data in tables. Hard-coding data in test scripts makes them hard to review and invites errors and omissions. Therefore write bridge code to allow test scripts to directly read test parameters from tables.
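A minimal illustration of the bridge code in Python: test parameters live in a table (CSV text here, standing in for a spreadsheet) and a test procedure script reads them row by row. The product function and field names are invented for illustration.

```python
import csv
import io

# Data-driven sketch: the bridge code reads test parameters from
# a table and feeds each row to the test procedure.

TABLE = """a,b,expected
2,3,5
10,-4,6
1,1,3
"""

def product_add(a, b):
    """Stand-in for the system under test."""
    return a + b

def run_data_driven(table_text):
    verdicts = []
    for row in csv.DictReader(io.StringIO(table_text)):
        actual = product_add(int(row["a"]), int(row["b"]))
        verdicts.append(actual == int(row["expected"]))
    return verdicts

verdicts = run_data_driven(TABLE)
```

The third row deliberately carries a wrong expectation, so the run reports two passes and one failure; adding more variations means adding rows, not code.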
Tester/Programmers create the test procedure scripts. Anyone can create the inputs. Any functionality that must be tested with lots of variations.
Mission
Product
[Example data-driven test table: four test rows varying Caption style (Yes/No), CG format (PCX, TIFF), and CG size (Large, Medium).]
Test Execution
Anyone can enter test parameters in a spreadsheet. Tests may be automatically generated using spreadsheet formulas or external programs. Live data may be used from legacy systems.
Tests are executed by data-driven scripts. Specify expected results with test inputs, or deliver input parameters with outputs to facilitate manual verification.
Test Evaluation
Test procedure scripts that execute in a Scripting Framework. Test data is stored in a spreadsheet. A framework library allows the test script to read the spreadsheet data.
Bridge Code
You intend to run lots of variations of a test procedure. Domain specialists are defining tests but are not programmers. You have automation specialists. You would like to reduce dependency on a specific test tool.
Dependability
High. Test procedure script can be adapted to interface changes without requiring changes to the data. Medium/High. Test data is easy to review. Test procedure scripts should be double-checked with known test data first.
Reviewability.
Medium. Reviewability helps. Test procedure script must be reviewed to ensure that it is executing as expected. Supporting navigation options can complicate the script and increase the chance of errors. High. Test data may be usable with different test procedure scripts.
Reusability
Screen-based Tables
Non-programmers want to create and automate lots of tests. They don't want to work through middlemen, and the screen designs are fixed in advance. Therefore, use tables to specify the windows, controls, actions and data for the tests.
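The idea can be sketched in Python: each table row names a window, a control, an action and data, and a dispatcher maps the action onto a driver call. The FakeGui class stands in for a real GUI test tool's driver; all names here are illustrative, not from any particular tool.

```python
# Screen-based table sketch: rows of (window, control, action,
# data) are dispatched to a GUI driver.

TABLE = [
    ("Login", "username", "type",  "alice"),
    ("Login", "password", "type",  "secret"),
    ("Login", "ok",       "click", None),
]

class FakeGui:
    """Stand-in driver; a real one would send events to the product."""
    def __init__(self):
        self.log = []

    def type(self, window, control, data):
        self.log.append(f"{window}.{control} <= {data}")

    def click(self, window, control, data=None):
        self.log.append(f"{window}.{control} clicked")

def dispatch(table, gui):
    for window, control, action, data in table:
        getattr(gui, action)(window, control, data)

gui = FakeGui()
dispatch(TABLE, gui)
```

Non-programmers edit only the table; the driver and dispatcher are maintained by automation staff.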
Mission
Many non-programming testers. Dedicated automation staff. User interface defined early; it won't change late.
Product
Test business logic from a user perspective. Allow tests to be reviewed by anybody. Avoid committing to a test tool.
Test Execution
User domain specialists specify tests, step by step, in spreadsheets. Tests can be written as soon as the screen design is fixed.
Tests are executed by a dispatcher script running in a framework. Expected results are also specified in the test case spreadsheets.
Test Evaluation
Bridge Code
Screen-based descriptions stored in spreadsheets. A test script reads the test tables line by line and executes them.
Allows the dispatcher script to access spreadsheet data. Defines the names of widgets on a screen and how they can be accessed.
Dispatcher
Window Maps
Window Maps
Abstraction layer that improves maintainability. Provides names and functions for conveniently accessing all controls or widgets. Included in the better test tools.
Costs to generate depend on the tool. Using window maps also greatly improves script readability.
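A window map can be as simple as a nested table from logical names to tool-specific locators, so tests never embed brittle class/index expressions. A Python sketch follows; the locator strings are invented stand-ins for whatever a real tool uses.

```python
# Window-map sketch: tests say locator("login", "ok") instead of
# repeating a brittle class/index expression everywhere.

WINDOW_MAP = {
    "login": {
        "username": "WEditField[class=Edit index=1]",
        "password": "WEditField[class=Edit index=2]",
        "ok":       "WButton[label='OK']",
    },
}

def locator(window, control):
    """Resolve a logical control name through the window map."""
    return WINDOW_MAP[window][control]

ok = locator("login", "ok")
```

When the screen changes, only the map changes; every test that names "login.ok" keeps working.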
[Example table: USERNM and PWD test-data columns with sample values.]
Dependability
Medium. Minor user interface changes can be handled using the window maps. High. Tests can be reviewed by almost anyone.
Reviewability
High. Error handling and logging is isolated to the execution system, which can be optimized for reliability Low. Test format facilitates replacing GUI test tool if the need arises.
Reusability
Action Keywords
You would like easy-to-read test scripts that can be created by business domain experts who may not be programmers. Therefore define Action Keywords that can appear in spreadsheets yet correspond to user tasks.
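A keyword dispatcher is the heart of this pattern: it maps each keyword in the spreadsheet to a task-library function. A minimal Python sketch, with keywords and tasks invented for illustration:

```python
# Action-keyword sketch: spreadsheet-style rows pair a keyword
# with arguments; the dispatcher maps keywords to task functions.

orders = []

def create_account(name):
    """Task-library function: simulate creating an account."""
    orders.append(("account", name))

def add_order(product, qty):
    """Task-library function: simulate adding an order line."""
    orders.append(("order", product, int(qty)))

TASKS = {"CreateAccount": create_account, "AddOrder": add_order}

# Rows as a domain expert might write them in a spreadsheet.
SCRIPT = [
    ("CreateAccount", ["alice"]),
    ("AddOrder", ["widget", "3"]),
]

def dispatch(script):
    for keyword, args in script:
        TASKS[keyword](*args)

dispatch(SCRIPT)
```

The spreadsheet stays readable in business terms, while all the GUI navigation detail lives inside the task functions.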
Mission
Business domain experts write the test scripts. Automation experts write the task libraries and navigation code. Typically used with GUI interfaces. Can also be used effectively with other interfaces (e.g. telephone).
Product
Support thorough automated testing by business domain experts. Facilitate test review. Write test scripts before software is available for testing. Create tests that will last.
Task library
Tests in spreadsheet format. Supported keywords and arguments. Mapped to task library functions.
Keyword definitions
Functions that execute the tasks. Written in the scripting language. Parses the spreadsheet data and executes the corresponding library function. Allows the dispatcher script to read spreadsheet data.
Dispatcher
Bridge code
Test Execution
Business domain specialists create tests in spreadsheets. Automation specialists create keywords and task libraries. Tests can be created early.
Tests are executed using a dispatcher and framework. Expected results are defined as verification keywords when tests are authored.
Test Evaluation
Sample Architecture
[Diagram: test cases expressed as action keywords; window maps; a driver with support for user-interface custom controls; custom testing hooks; the user-interface components.]
Action Keywords
When might you use this?
Non-technical domain specialists. Tests will be used for a long time. Wish to write tests early. Expect user interface changes.
Dependability
High. Only the task libraries need to be updated when user interfaces change. High. Test format is easy to understand.
Medium. It really depends on how well the dispatcher and task functions are engineered. Medium. Tasks can be reused for many tests.
Reviewability
Reusability
Column semantics defined by each procedure script for all rows One per test procedure
Control Script
General purpose; for many test procedures.
Task Libraries
Writing good libraries is never simple. Two forces make test libraries a particular challenge:
Test variations present ample opportunities for premature generalization. Test libraries are less likely to be tested themselves.
Focus on grouping tasks in users' terms. Document start and end states. Only create functions that will be used dozens of times. Write tests for your libraries.
They must note and verify start and end states. Tasks may require more verification than typically appears in manual test descriptions.
Creates a customer account and returns to the main screen. Adds the specified product to the order sheet. Completes the existing order using the specified credit card and address.
Encapsulate common user tasks. Facilitate navigating through the elements of the user interface. Check user interface elements against expected results. Automatically log errors when appropriate. Support preconditions and postconditions. An Error Recovery System is an example.
Navigation Libraries
Verification Libraries
Library design integrity must be adhered to. Larger up-front costs. Increased complexity.
Unit Testing
API Testing
[Comparison: unit interfaces are exposed automatically and apply to units large and small; APIs must be created, are typically public, are often in a different language from the product, and are typically provided for large units only.]
Unit isolation testing: test each unit in isolation. Unit integration testing: test units in context.
Create stubs or simulators for units being depended on. Allow calls to classes, functions and components that the unit requires.
Test-First Programming
You want to be sure that the code works as soon as it is written. And you want to have regression tests that will facilitate reworking code later without worry. Therefore use Test-First Programming, a technique that uses testing to structure and motivate code development.
Many programmers believe that Test-First Programming results in better design.
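The rhythm can be shown in a few lines of Python: the test exists before the code, would fail on its own, and then the minimal implementation makes it pass. The slugify function is invented for illustration.

```python
# Test-first sketch: step 1, write the test before any code exists.

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Step 2: write just enough code to make the test pass.
def slugify(title):
    """Lower-case a title and join its words with hyphens."""
    return "-".join(title.strip().lower().split())

test_slugify()   # raises AssertionError if the code is wrong
```

The test then stays behind as a regression check, which is what makes later reworking of the code safe.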
Mission
Programmers use this technique when they develop code. Typically used on products using iterative development methodologies.
Product
Test code before it is checked in. All code must have automated tests.
Write a test for anything that could possibly fail. Programmers create a test, write some code, then write another test. Tests are written in the same language as the product code.
Test Execution
Tests are executed using a unit testing framework. Expected results are specified when the tests are written.
Test Evaluation
Dependability
Medium. Tests are maintained with the code. Medium. Can be reviewed by other programmers.
Reviewability
Medium/High. Tests are run before the code is written. This helps to test the tests. Low
Reusability
API Tests
User interface tests are often tricky to automate and hard to maintain. Therefore, use or expose programming interfaces to access the functionality that needs to be tested.
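An API test drives the product through calls rather than through screens. In this Python sketch, OrderApi stands in for a real product's programming interface; the test exercises both normal behavior and error handling. All names are invented.

```python
# API-test sketch: exercise the product's programming interface
# directly, with no GUI tool involved.

class OrderApi:
    """Stand-in for a product's public programming interface."""
    def __init__(self):
        self.items = []

    def add(self, product, qty):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.items.append((product, qty))

    def total_quantity(self):
        return sum(qty for _, qty in self.items)

def test_order_api():
    api = OrderApi()
    api.add("widget", 2)
    api.add("gadget", 3)
    assert api.total_quantity() == 5
    # the API must reject invalid input
    try:
        api.add("widget", 0)
        assert False, "expected ValueError"
    except ValueError:
        pass
    return "PASS"

verdict = test_order_api()
```

Because the test never touches a screen, it is immune to layout changes and can run long before the user interface is stable.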
Examples: Excel, Xconq.
Mission
Programmer/Testers write the tests. Good cooperation between testers and product programmers. Any product with some kind of programming interface. Any product that can have an interface added.
Find an effective way to write powerful automated tests. Testing starts early.
Product
Client/server protocol interfaces may be available. APIs or command-line interfaces may be available. Diagnostic interfaces may also be available.
It may be cheaper to create or expose one for testing than to build GUI test automation infrastructure
Test Evaluation
Tests are written in a scripting language. Tests are executed in a scripting framework or using a programming language.
Test Execution
High. APIs tend to be more stable than GUIs. Medium/High. Tests are written in a standard scripting language.
Dependability
Reviewability
Reusability
[Comparison table: unit test harnesses (public domain) keep tests in the same language as the product, and anyone can run them; scripting test harnesses (public domain) require understanding the API, and anyone can run them; GUI test tools (purchased) require tool training, and a tool license is needed to run them.]
Thin GUI
You want to test the user interface without using a GUI test tool. Therefore, design the GUI as a thin layer of presentation code atop the business logic. Use unit or API test techniques to test the business logic layer.
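The pattern can be illustrated in Python: the business logic is a plain function, the presentation layer only formats its result, and the tests target the logic directly with no GUI tool involved. All names here are invented.

```python
# Thin-GUI sketch: keep the presentation layer so thin that the
# interesting behavior is all testable below it.

def discount(total, is_member):
    """Business logic layer: apply a 10% member discount."""
    rate = 0.10 if is_member else 0.0
    return round(total * (1 - rate), 2)

def render_total(total, is_member):
    """Thin presentation layer: format only, no decisions."""
    return f"Total: ${discount(total, is_member):.2f}"

# Unit tests hit the logic directly; only render_total touches
# presentation, and it contains almost nothing to get wrong.
member_price = discount(100.0, True)
label = render_total(100.0, True)
```

The thinner the presentation layer, the less of the product remains that can only be verified through the GUI.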
Mission
Programmer/Testers. The presentation layer is developed as a thin layer atop the business logic code.
Product
Test Evaluation
Tests are created as unit tests. Tests are executed in the unit test framework.
Test Execution
Dependability
High. Splitting the GUI separates tests from interface changes. Technical issues, however, may interfere. Medium. Other programmers can review. No special tool language used.
Medium. Note that some traditional testing of the GUI is still required. Medium. The unit test framework is being reused.
Reusability
Reviewability
Consult an Oracle
You want to evaluate lots of tests. Therefore, Consult an Oracle. An oracle is a reference program that computes correct results.
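A sketch of oracle-based evaluation in Python: an approximate square-root routine stands in for the system under test, math.sqrt serves as the oracle, and a rule states the accuracy that counts as agreement. The implementation under test is invented for illustration.

```python
import math
import random

# Oracle sketch: compare the system under test against a
# reference computation over many generated inputs.

def fast_sqrt(x, rounds=20):
    """Newton's method; stand-in for the system under test."""
    guess = x if x > 1 else 1.0
    for _ in range(rounds):
        guess = (guess + x / guess) / 2
    return guess

def run_against_oracle(trials=100, rel_tol=1e-6, seed=42):
    """Count inputs where the product disagrees with the oracle."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        x = rng.uniform(0.01, 1e6)
        expected = math.sqrt(x)          # the oracle
        if abs(fast_sqrt(x) - expected) > rel_tol * expected:
            failures += 1
    return failures

failures = run_against_oracle()
```

The tolerance rule is the part that needs care: it defines the domain and accuracy within which the oracle is treated as authoritative.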
Mission
Test thoroughly
Product
Test Evaluation
Typically large numbers of tests are generated randomly. A testing framework sends the test inputs to both the system under test and the oracle.
Test Execution
The results are compared. Typically rules must be defined regarding the domain in which the oracle is considered authoritative and the degree of accuracy that is acceptable.
Self Verifying
Doesn't check correctness of results; just makes sure the program doesn't crash. Independent generation of results; often expensive. Compares results from different runs/versions (gold files).
True
Inputs indicate the correct result; the correct result is computed when the data is generated. Only checks some characteristics of results; often very useful.
Heuristic
Consistency
Dependability
High. High. If you save the inputs and outputs, anyone can double-check them.
Reviewability
Varies. Mostly this depends on the dependability of the oracle. Don't forget to test your framework and accuracy calculations by seeding errors. High.
Reusability
Automated Monkey
Users will invariably try more input sequences than you could ever think up in the test lab. Interaction problems, in which a feature only fails after a previous action triggered a hidden fault, are hard to find. Therefore, develop an Automated Monkey: a program that walks a state model of the product under test, generating lots of test sequences.
[State-model diagram: from Home, Add Account and Add Order transitions lead to account and order screens and back.]
Test Execution
Define a state model that corresponds to part of the product. Use algorithms to generate test paths through the state model.
Execute the test paths against the product. As a practical matter, this needs to be automated. Verify that the product is in the correct state for each transition.
Test Evaluation
Test Paths
A state model consists of nodes, which are the states, and edges, which are the transitions between states. The simplest algorithm picks a random transition from each node (a random walk). Mathematical graph theory provides several algorithms that can be used to ensure specific levels of coverage.
Generation Algorithm
A test path is a chain of transitions that traverses the model. Each transition is an action and each node on the chain is a state. This script executes the test path. A verification method is executed for each node.
Execution Engine
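A random-walk monkey over a small state model can be sketched in Python: nodes are states, edges are user actions, and the walk generates a test path whose every transition can be checked against the model. The model and action names are invented for illustration.

```python
import random

# Automated-monkey sketch: a state model and a random walk that
# generates a test path through it.

MODEL = {
    "Home":       {"add_account": "AddAccount", "add_order": "AddOrder"},
    "AddAccount": {"save": "Home", "cancel": "Home"},
    "AddOrder":   {"submit": "Home", "cancel": "Home"},
}

def random_walk(model, start, steps, seed=1):
    """Generate a test path of (state, action) pairs."""
    rng = random.Random(seed)
    state, path = start, []
    for _ in range(steps):
        action = rng.choice(sorted(model[state]))
        path.append((state, action))
        state = model[state][action]
    return path

path = random_walk(MODEL, "Home", 6)

# Every transition in the path must be legal in the model; a real
# monkey would also verify the product's state at each node.
legal = all(action in MODEL[state] for state, action in path)
```

A real execution engine would replay each action against the product and verify the resulting state, which is where most of the engineering effort goes.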
Dependability
Varies. Low/Medium. State models may be hard to review; it helps if tests are generated in a reviewable form.
Reviewability
Varies. Depends on the state verification functions. (Instrumentation may be required.) Varies
Reusability
Assertions are statements of invariants: when these fail, you've found a bug. Diagnostics are warnings: further analysis is required.
Mission
Tester/Programmers. Programmer/Testers. Many. Many standard components have diagnostic interfaces built in.
Product
Assertions. These logical statements in the code make assumptions explicit. If false, there must be a bug. Typically assertion checking is only done during testing and debugging. Database Integrity Checks. A program checks the referential integrity of a database, reporting errors found. Code Integrity Checks. Compute a checksum to see whether code has been overwritten. Memory Integrity Checks. Modify memory allocation to make wild pointers more likely to cross application memory allocations and trigger memory faults. Fault Insertion. Allow error handling code to be triggered without having to actually create the error conditions (e.g. bad media, disk full) Resource Monitoring. Allow configuration parameters, memory usage and other internal information to be viewed.
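An assertion in code form: the invariant is checked where the programmer's assumption lives, so a bug announces itself even if no external test looks at the result. The account example is invented for illustration.

```python
# Assertion sketch: an invariant checked inside the code itself.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        assert amount > 0, "withdrawal must be positive"
        self.balance -= amount
        # invariant: the balance must never go negative; if it
        # does, there is a bug regardless of what any test expects
        assert self.balance >= 0, "bug: balance went negative"

acct = Account(100)
acct.withdraw(40)          # fine: 100 -> 60

try:
    acct.withdraw(100)     # trips the invariant: 60 - 100 < 0
    tripped = False
except AssertionError:
    tripped = True
```

As the slide notes, assertion checking like this is typically enabled only in test and debug builds, since the checks cost time in production.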
Test Evaluation
Varies. Tests can be created using Automated Monkey. Manual Exploratory testing is also supported. Varies. May require debug version of software.
Test Execution
Assertions report that errors occurred. Diagnostics must be analyzed, either to help debug assertion errors or to suggest problems lying in wait.
Dependability
Medium/High. Assertions and Diagnostics should be revised as the code is changed. Medium. This makes the code execution easier to understand. Diagnostics may help provide information regarding test coverage.
Reviewability
Medium. Depends on how well Assertions and Diagnostics have been added. High. Any test can use them.
Reusability
Focus on smoke tests, configuration tests, tests of variations, and endurance tests. Plan to throw away code. In the process, learn about your tools and the possibilities for automation.
Code coverage measurement. Memory leak and other specialized testing. Test compliance with interface standards. Collect performance metrics. Confirm pre-defined release criteria.
Tester/Programmers. Any. Automate tests that will pay back quickly for the time invested in creating them.
Product
Mission
Dependability
Low/None. Low
Reviewability
Varies. You're really depending on the people who create and run the tests. Low.
Reusability
[Keeping it Simple]
Scripting Frameworks
Architecture Patterns
Introduction Quality Attributes Architectural Patterns Are You Ready to Automate? Concluding Themes
6. No humans were harmed in the testing of this software. 7. Big bucks already spent on the test tool. 8. Looks good on the resume. 9. No Testing for Dummies book ... yet. 10. Keep the intern busy.
Pilot Project
Validate your tools and approach. Demonstrate that your investment in automation is well-spent. Quickly automate some real tests. Get a trial license for any test tools. Scale your automation project in steps.
Perspectives Differ
Roles
Speed up testing. Allow more frequent testing. Reduce manual labor costs. Improve test coverage. Ensure consistency. Simplify testing. Define the testing process. Make testing more interesting and challenging. Develop programming skills. Justify the cost of the tools. Of course we'll have automation!
Speed up testing. Allow more frequent testing. Reduce manual labor costs. Improve test coverage. Ensure consistency. Simplify testing. Just want testing to go away.
Define the testing process. Make testing more interesting and challenging. Develop programming skills. Justify the cost of the tools. Of course we'll have automation!
Unreasonable
Mixed Bag
Success Criteria
What are your success criteria? The automation runs. The automation does real testing. The automation finds defects. The automation saves time. What bugs aren't you finding while you are working on the automation? What is the goal of testing?
Ready to Automate?
1. Is automation or testing a label for other problems?
2. Are testers trying to use automation to prove their prowess?
3. Can testability features be added to the product code?
4. Do testers and developers work cooperatively and with mutual respect?
5. Is automation developed on an iterative basis?
6. Have you defined the requirements and success criteria for automation?
7. Are you open to different concepts of what test automation can mean?
8. Is test automation led by someone with an understanding of both programming and testing?
Ready to Automate?
[does not print]
A score of 55 or less: never mind.
Concluding Themes
What Have We Learned?
Introduction Quality Attributes Architectural Patterns Are You Ready to Automate? Concluding Themes
Keep It Simple
Test automation tends to complicate testing. The test suite itself will need to be tested. Make sure the test suite meets the original goals.
Commitment Is Essential
It is easy for test automation to be designated as a side project. It won't get the resources it needs. Commitment ensures that test automation gets the resources, cooperation and attention that it needs.
Resources
Books, Websites, Consultation
The first section provides a general overview of test automation with a description of common practices. The second collects accounts from various automators describing their projects. Describes Scripting Framework, Data-driven Scripts, Screen-based Tables, and Action Keywords. A detailed elaboration of Action Keywords. A concise description of Screen-based Tables. Contains a chapter on Automation Monkey by Noel Nyman. Describes API-based testing.
The chapter on test automation has been described as more useful than any of the books on test automation. Understand how to customize your testing strategy based on the architecture of your system. Describes how to test effectively when programmers don't follow the rules. Details 23 attacks for uncovering common bugs, including fault-insertion techniques.
Websites
Reference Point: Test Automation, Pettichord (Readings)
QA Forums, qaforums.com
Good place to get current tool-specific information. Has boards for all the popular test tools.
Free Consultation
As a student in this seminar you are entitled to a free consultation.
Send me an email describing your situation. Remind me that you attended this seminar.
If you want to talk on the phone, let me know good times when you can be reached. bret@pettichord.com, 512-302-3251
Contact
Bibliography
[does not print]