
Test Automation Architectures

These slides are distributed under the Creative Commons License. In brief summary, you may make and distribute copies of these slides so long as you give the original author credit and, if you alter, transform or build upon this work, you distribute the resulting work only under a license identical to this one. For the rest of the details of the license, see http://creativecommons.org/licenses/by-sa/2.0/legalcode.


Test Automation Architectures


A Context-Based Approach

Bret Pettichord
bret@pettichord.com www.pettichord.com

August 2002 (c) 2002 Bret Pettichord

Welcome
Getting the most out of this seminar:
- Let us know of special interests.
- Ask questions, either during class or written down and shared during a break.
- Share your experience and perspective.


Seminar Objectives
- Understand the different options for test automation that are available to you
- Learn test automation concepts
- Identify important requirements for success
- Contrast the benefits of GUI, API and other approaches
- Learn which contexts are most suitable for the different approaches
- Select a test automation architecture suitable for your project

Agenda
- Introduction: test automation patterns, context, architecture, mission
- Quality Attributes: maintainability, reviewability, dependability, reusability
- Architectural Patterns: Scripting Frameworks, Data-Driven Scripts, Screen-based Tables, Action Keywords, Test-First Programming, API Tests, Thin GUI, Consult an Oracle, Automated Monkey, Assertions and Diagnostics, Quick and Dirty
- Are You Ready to Automate?
- Concluding Themes

Background
Domains:
- Technical publishing
- Expense reporting
- Sales tracking
- Database management
- Systems management
- Application management
- Internet access
- Math education
- Benefits administration

Tools:
- SilkTest (QA Partner)
- WinRunner
- Rational Robot (TeamTest)
- TestQuest (Btree)
- WebLoad
- QA Run (Compuware)

Languages:
- Perl
- Expect (TCL)
- Java
- Korn shell
- Lisp
- Python

Acknowledgements
Much of the material in the course results from discussions with colleagues.
- Los Altos Workshops on Software Testing (LAWST 1 & 2)
- Austin Workshops on Test Automation (AWTA 1 & 2)
- Workshop on Model-Based Testing (Wombat)
- Other course notes and reviews

Introduction

Introduction Quality Attributes Architectural Patterns Are You Ready to Automate? Concluding Themes

Test Automation, Patterns, Context, Architecture, Mission


A Fable: First Episode


Jerry Overworked starts an automation project (on top of everything else he is doing). He can't get the test tool to work and calls support several times. Eventually the vendor sends out an engineer who gets the tool to work with their product. Many months have passed. Jerry refuses to work on automation any further.

A Fable: Second Episode


Kevin Shorttimer takes over. He is young, eager and excited about doing test automation. He builds a large library and a complex testing system, uses the automated tests for testing, and actually finds bugs. Then Kevin leaves for a development position.

A Fable: Final Episode


Ahmed Hardluck is given the test suite. It takes him a while to figure out how to run the tests. Major product changes break the automation and most tests fail. Ahmed gets help and the test suite is repaired. The tests eventually pass and the product ships. Unfortunately, the test suite ignored errors. Customers are irate, and the product is a failure.

A Fable: Some Problems


- Spare-time test automation
- Lack of clear goals
- Lack of experience
- Testing the wrong stuff
- High turnover
- Reaction to desperation
- Reluctance to think about testing
- Technology focus
- Working in isolation

The Rules of Software Development


- Define requirements
- Manage source code, test data and tools
- Design before coding
- Review test automation code
- Test the code
- Track automation bugs
- Document for other users
- Establish milestones and deliverables
- Don't be a level-zero organization

Seven Steps to Test Automation Success


1. Follow the Rules of Software Development
2. Improve the Testing Process
3. Define Requirements
4. Prove the Concept
5. Champion Product Testability
6. Design for Sustainability
7. Plan for Deployment

The Capture Replay Idea


- Regression testing is repetitive, boring and mindless. Therefore it should be easy to automate.
- Capture the tester's events while testing.
- Replay the same events on a later build.
- If anything different happens, it must be a bug.
- No programming required!


Capture Replay: Methodological Problems


- Must specify tolerances and scope for what counts as "the same"
- Must prepare for user interface changes
- Must be able to work in different configurations and environments
- Must track the state of the software under test
- Hard-coded data limits reuse

Solving these problems requires programming!

How can Capture Replay tools be so smart?


They must instrument the system so that they can interrogate controls and intercept and insert user events. Techniques, from newer to older:
- Insert hooks into the operating system
- Replace browser DLLs
- Replace operating system DLLs
- Supplant shared libraries by changing the load path
- Provide their own instrumented window manager
- Directly instrument the application event loop

Capture Replay: Modify the System?


Modifying system libraries only adds another element of instability. To do this, the test tool engineers must reverse engineer the system being instrumented. They often have to use undocumented and unsupported interfaces, so minor configuration changes can break the test tool. Recording technology is the most sensitive.

Capture Replay: Using the Latest Technology?


Your developers incorporate the latest version in your software. You get to test it. The test tool developers also get the latest version. They reverse engineer it so that they can support it in the next release. Test tools are always playing catch up. If they are behind, you will have to debug the problems yourself.

Capture Replay: Technical Problems


- Tools are constantly playing catch-up with new technologies.
- Instrumented systems tend to be more unreliable, especially with configuration variations. Recording technology is the most sensitive.
- Some subtle system bugs can seriously confuse test tools.
- Tools require special customization to support custom controls.

Capture Replay: Custom Controls


Custom controls are non-standard controls that a tool can't support out of the box. GUI test tool experts must customize the test tools to provide the necessary support. A control may or may not be "custom" depending on the tool you are using.

Examples:
- Grids with embedded drop-down lists
- Treeviews
- Delphi lists
- 3270 emulators
- PowerBuilder
- Icons

Capture Replay: Supporting Custom Controls


- Mapping to known controls
- Operating using key events
- Computing mouse events
- Using optical character recognition
- Calling internal APIs
- Faking it by accessing external data
- Peeking at the clipboard

Capture Replay: A Pattern without a Context


When could Capture Replay work?
- The user interface is defined and frozen early.
- Programmers use late-market, noncustomized interface technologies.

These situations are rare. There are very few reports of projects actually using Capture Replay successfully.


What is a Pattern?
A pattern is a solution to a problem in context:
- that occurs repeatedly
- that can be tailored to variations in context

Tailorability requires an explanation of:
- the forces that the pattern resolves
- the consequences of its application

Test Automation Context


The context for test automation is the triangle of staff, product and mission:
- Staff skills: find a way to leverage the skills available for testing.
- Product architecture: take advantage of all the interfaces of the software under test.
- Test mission: focus on key features or concerns that can benefit from the added power of automated testing.

Tester Skill Variation


User Specialists:
- Understand the user perspective and the problem domain
- Have experience in the targeted user role
- Anticipate unwritten requirements

Technical Specialists:
- Understand the technology and the solution domain
- Have experience with the technology
- Anticipate technology challenges

Automation Specialists:
- Understand testing technology
- Have experience with test tools
- Anticipate automation needs

A mix of skills improves testing effectiveness. Design your test strategy and automation architecture to allow contributions from all.

Staffing Models
- User Experts + Tools
- User Experts + Automation Experts
- Junior Programmers
- Tester/Programmers
- Test Expert + Warm Bodies
- Spare-time Automation
- Central Automation Team

Forming an Automation Team


- Treating automation as a development project requires committing staff to the automation as their first priority.
- Do you have user specialists, technical specialists, automation specialists or warm bodies?
- Do you have someone with experience in automation to lead the project?
- Do you have product developers who can lend assistance?

Activity: Contextual Questions About People


Answer these questions for your current or most recent project.
Source: LAWST 2, reprinted in Kaner, "Architectures of Test Automation" and "Avoiding Shelfware"


Product Architecture
- Hardware and software
- Multiple machines
- Distributed architecture
- Networking
- Databases
- Multiple users, multiple user roles
- Both GUI and non-GUI interfaces are available for testing
- Event-driven & multi-threaded

What is Architecture?
- Selection of tools, languages and components
- Decomposition into modules:
  - standard modules which can be acquired
  - custom modules which must be built
- Distribution of labor and responsibility

Product and Test Architectures


Product Architecture:
- Determined by system designers
- Designed to meet the quality requirements of the customer
- Provides the context for the testing

Test Automation Architecture:
- Determined by test designers
- Designed to meet the quality requirements of the testing
- Provides support for the testing

Activity: Contextual Questions About Test Tools


Answer these questions for your current or most recent project.
Source: LAWST 2, reprinted in Kaner, "Architectures of Test Automation" and "Avoiding Shelfware"


Activity: Contextual Questions About Your Product


Answer these questions for your current or most recent project.
Source: LAWST 2, reprinted in Kaner, "Architectures of Test Automation" and "Avoiding Shelfware"



Test Mission
What is your test mission?
- What kind of bugs are you looking for?
- What concerns are you addressing?

Possible missions:
- Find important bugs fast
- Verify key features
- Keep up with development
- Assess software stability, concurrency, scalability
- Provide service

Make automation serve your mission. Expect your mission to change.

Two Focuses
Efficiency focus:
- Reduce testing costs
- Reduce time spent in the testing phase
- Improve test coverage
- Make testers look good
- Reduce impact on the bottom line

Service focus:
- Tighten build cycles
- Enable refactoring and other risky practices
- Prevent destabilization
- Make developers look good
- Increase management confidence in the product

Automation projects with a service focus are more successful.

Test Strategy
Opportunities for test automation:
1. Software setup (next slide)
2. Test creation: test inputs, expected results, test selection
3. Test execution: external interfaces, internal interfaces
4. Results evaluation: consulting oracles, comparing baselines

Steps 1-2 specify tests, step 3 executes them, and step 4 verifies results. Automating execution can leave lots of manual work remaining.

Test Setup
Software testing usually requires lots of setup activities in preparation for testing:
- Installing product software
- Configuring operating systems
- Initializing databases
- Loading test data

Many of these activities can be automated. System administration tools are often useful and cost-effective.

Activity: Contextual Questions About Your Mission


Answer these questions for your current or most recent project.
Source: LAWST 2, reprinted in Kaner, "Architectures of Test Automation" and "Avoiding Shelfware"



Quality Attributes

Introduction Quality Attributes Architectural Patterns Are You Ready to Automate? Concluding Themes

Maintainability, Reviewability, Dependability, Reusability


Essential Capabilities
"Automation is replacing what works with something that almost works, but is faster and cheaper." (Professor Roger Needham)

What trade-offs are we willing to make?
- Functional requirements for test automation: the tests that need to be executed (and re-executed)
- Quality (non-functional) requirements for test automation: the requirements that result from the fact that we are automating
Maintainability
- Will the tests still run after product design changes?
- Will tests for 1.0 work with 2.0?
- Can tests be easily updated for 2.0?
- Will tests fail because of changes to the output format?


User Interfaces Change


Your GUI test automation will likely break. What can you do?
- Prevent developers from changing the interfaces
- Design your automation so it is adaptable
- Test via non-user interfaces

Reviewability
- Can others review the test scripts and understand what is being covered?
- Are the test scripts documented?
- Can we make sure that the automated script matches the original test design?
- What kind of coverage does the test suite have? How can we know?
- Is the test testing the right stuff?

Repeatability
- Will your test do the exact same thing every time?
- Is random data embedded in your tests?
- Do your tests modify objects in a way that prevents them from being re-run?

Integrity
- Can your test results be trusted?
- Do you get lots of false alarms?
- Are you sure that failed tests always appear in the test results?
- Is it possible for tests to be inadvertently skipped?

A basic principle: automated tests must fail if the product under test is not installed.

Reliability
- Will the test suite actually run? Will tests abort?
- Can you rely on the test suite to actually do some testing when you really need it?
- Will it run on all the platforms and configurations you need to test?


Reusability
To what degree can testing assets be reused to create more, different tests? This goes beyond mere repetition. Can you amass a collection of data, procedures, mappings and models that can be reused in new ways to make more testing happen?

Independence
- Can your tests be run individually, or only as part of a suite?
- Can developers use them to reproduce defects?
- Will your tests run correctly if previous tests fail?
- Will one failure cause all succeeding tests to fail?

Performance
Rarely is it worth optimizing test automation code for performance. Supporting independence and repeatability can impact performance, but performance improvements can complicate tests, reduce reliability and may even damage integrity.


Simplicity
"Things should be as simple as possible, but no simpler." (Einstein) Complexity is the bugbear of test automation. You will need to test your test automation, but you are likely to have few resources for this. Therefore your architecture must be as simple and perspicuous as possible.

Quality Attributes
- Maintainability
- Reviewability
- Repeatability
- Integrity
- Reliability
- Reusability
- Independence
- Performance
- Simplicity


Quality Attributes for Architectural Patterns


We will assess each architectural pattern in terms of:
- Maintainability
- Reviewability
- Dependability (reliability & integrity)
- Reusability

Architectural Patterns
Frameworks for developing automated tests

Introduction Quality Attributes Architectural Patterns Are You Ready to Automate? Concluding Themes


Test Automation Architecture


Generally, software architecture is:
- Selection of tools, languages and components
- Decomposition into modules:
  - standard modules which can be acquired
  - custom modules which must be built
- Distribution of labor and responsibility

Our approach considers:
- Context: people, product, mission
- Test strategy: test creation, test execution, test evaluation
- Quality attributes: maintainability, reviewability, dependability, reusability

Scripting Framework
You want to create a lot of tests without building a lot of custom support code. Therefore, use a generalized scripting framework. Extend it as needed for your project. Most commercial GUI test tools are scripting frameworks with GUI drivers.

Scripting Framework: Example


[-] testcase response_notice_group ()
[ ] Desktop.SetActive ()
[ ] Desktop.IconView.DoubleClick ("Notices")
[ ] ReadNotices.Update.Click () // update
[ ] ReadNotices.ListBox.Select ("Sentry*")
[ ] ReadNotices.CatchUp.Click ()
[ ] ReadNotices.Close.Click ()
[ ] Desktop.IconView.DoubleClick ("sentry-region")
[ ] PolicyRegion.IconView.DoubleClick ("BVT_DM")
[ ] ProfileManager.Profiles.DoubleClick ("Sentry_BVT")
[ ] DistributedMonitoring.AddMonitor.Click ()
[ ] AddMonitor.Collections.Select ("Universal")
[ ] AddMonitor.Sources.Select ("Swap space available")
[ ] AddMonitor.AddEmpty.Click ()
[ ] EditMonitor.ResponseLevel.Select ("always")
[ ] EditMonitor.SetMonitoringSchedule.Click ()
[ ] SetMonitoringSchedule.CheckEvery.SetText ("1")
[ ] SetMonitoringSchedule.TimeUnits.Select ("minutes")
[ ] SetMonitoringSchedule.ChangeClose.Click ()
[ ] EditMonitor.SendTivoliNotice.Click ()
[ ] EditMonitor.NoticeGroupList.Select ("Sentry")
[ ] EditMonitor.ChangeClose.Click ()
[ ] DistributedMonitoring.Profile.Save.Pick ()
[ ] DistributedMonitoring.Profile.Distribute.Pick ()
[ ] DistributeProfile.DistributeClose.Click ()
[ ] DistributedMonitoring.Profile.Close.Pick ()
[ ] sleep (80)
[ ] Desktop.SetActive ()
[ ] Desktop.IconView.DoubleClick ("Notices")
[ ] ReadNotices.ListBox.Select ("Sentry (* unread)")

Scripting Framework: Context


People:
- Tester/Programmers
- Test Tool Specialists

Product:
- Can automate GUI testing
- Frameworks also support API and unit testing

Mission:
- Automate tests that will last the life of the product
- Automate tests that are defined in advance
- Provide maximum flexibility of approach

Scripting Framework: Basic Components


Scripting Language:
- Often proprietary
- Standard languages are preferred

Test Scripts:
- Written in the scripting language
- May contain calls to library functions

Test Harness:
- Executes tests and collects test results
- Optional support for preconditions and postconditions (setup and teardown)

Scripting Framework: Test Harness


Necessary capabilities:
- Run the test scripts
- Collect test verdicts
- Report test results

Optional capabilities:
- Check test preconditions (abort or correct if they are not met)
- Allow selected subsets of tests to run
- Distribute test execution across multiple machines
- Distinguish between known failures and new failures
- Allow remote execution and monitoring
- Use an Error Recovery System (covered later)
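A minimal sketch of such a harness in Python. This is illustrative only (the structure and names are assumptions, not any particular tool's API); real harnesses add setup/teardown, test selection, distribution and recovery.

import traceback

def run_tests(tests):
    """Run each test callable and collect a verdict per test."""
    verdicts = {}
    for test in tests:
        try:
            test()
            verdicts[test.__name__] = "PASS"
        except AssertionError:
            verdicts[test.__name__] = "FAIL"
        except Exception:
            verdicts[test.__name__] = "ABORT"  # test could not complete
            traceback.print_exc()
    return verdicts

def report(verdicts):
    """Report results: the third necessary capability."""
    for name, verdict in sorted(verdicts.items()):
        print(f"{verdict:5} {name}")
    failed = sum(v != "PASS" for v in verdicts.values())
    print(f"{len(verdicts)} tests, {failed} not passing")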

Scripting Framework: Examples


                  Character-based testing   Unit testing      Command-line testing
Interface Driver  Expect                    N/A               N/A
Language          TCL                       Source language   Perl, Shell
Test Harness      DejaGNU                   xUnit, JUnit      TET

These frameworks are freely available.
Scripting Framework: GUI Support Components


Widget Support Library:
- Driver functions for graphical user interfaces
- Identify user interface components using specified qualifiers
- Insert events into the input stream aimed at those components
- Requires customization to support custom controls

Recorders:
- Action recorders record user actions as scripts.
- Object recorders ("spy" tools) report control identities by class and properties, and assist hand-coding.
- Recorders are useful for creating tests.

Scripting Framework: Test Strategy


Test Creation:
- Tests are hand-coded, using capture replay when possible.
- The product must be available before writing tests.

Test Execution:
- The test harness and scripting language provide an execution environment for the tests.

Test Evaluation:
- Expected outcomes are hand-coded when tests are created, or
- the framework supports capturing a baseline for later comparison when the test is first run.

Using Scripting Framework for GUI Testing


- Verify support for custom controls. Customize as necessary. Encapsulate any special tricks.
- Determine the best strategy for window maps.
- Create test scripts in the scripting language, using recorders for assistance.
- Use cut and paste to create test variants. Avoid placing re-used code in libraries (see next slide).
- Build or customize a recovery system to handle known error messages. Build tests to verify the recovery system.

Cut and Paste: True Evil?


Common wisdom:
- Avoid using cut and paste
- Repeated code smells bad
- Instead, place repeated code in functions

The testing exception:
- Tests by their nature use lots of repetition
- Tests are easier to review if they don't include distracting function calls and conditional statements
- Tests become less reliable with added complexity

Therefore, use cut and paste when it makes tests easier to understand and modify.

Scripting Framework: Considerations


When might you use it?
- Pilot projects. It serves as a foundation for more complicated architectures.

When might it be enough?
- It offers the minimum complexity that provides reasonable flexibility and robustness.
- The user interface is very stable, or the product has a short life-span.
- Domain specialists can program, or tests are well-specified.

Scripting Framework: Quality Attributes


Maintainability: Medium
- Additional design patterns can be used to insulate tests from interface changes.
- Testers will require discipline to avoid using unmaintainable constructs.

Reviewability: Low
- Tests are written in a scripting language that may not be known by many.

Dependability: Medium
- The scripting language provides the freedom to write clever, complex, error-prone code.

Reusability: Medium
- The framework can be the foundation of other architectures.

Error Recovery System


An Error Recovery System resets the system when a test fails.
- Without a recovery system, test suites are prone to cascading failures (the domino effect).

A recovery system may:
- Close extraneous application windows
- Shut down and restart the product
- Reboot the hardware
- Reset test files
- Reinitialize the database
- Log the error

A recovery system returns the product to a base state. Tests need to start from this base state.
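A sketch of the idea in Python. The product-specific steps (close_extra_windows, restart, reset_database, at_base_state) are hypothetical placeholders for whatever your product actually requires.

import logging

def recover(product):
    """Return the product under test to its base state and log the error."""
    logging.error("test failed; running recovery")
    product.close_extra_windows()  # clear leftover dialogs
    product.restart()              # shut down and relaunch
    product.reset_database()       # reinitialize test data
    assert product.at_base_state(), "recovery failed to reach base state"

def run_with_recovery(tests, product):
    for test in tests:
        try:
            test(product)
        except Exception:
            recover(product)       # keep one failure from cascading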

Data-Driven Scripts
Good testing practice encourages placing test data in tables. Hard-coding data in test scripts makes them hard to review and invites errors and omissions. Therefore, write bridge code to allow test scripts to read test parameters directly from tables.


Data-Driven Scripts: Example


Name      Password   Result
mfayad    xyz        login
dschmidt  123        expired
rjohnson  abc        reject
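A minimal data-driven script sketch in Python. The CSV layout mirrors the table above; do_login and its driver calls are assumptions standing in for real driver code.

import csv

def do_login(name, password):
    """Stand-in for real driver code that exercises the login screen."""
    raise NotImplementedError("replace with product driver calls")

def test_logins(table_path="logins.csv"):
    """One test case per table row; the procedure is written only once."""
    with open(table_path, newline="") as f:
        for row in csv.DictReader(f):
            actual = do_login(row["Name"], row["Password"])
            expected = row["Result"]
            assert actual == expected, (
                f"{row['Name']}: expected {expected}, got {actual}")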

Data-Driven Scripts: Context


People:
- Tester/Programmers create the test procedure scripts
- Anyone can create the inputs

Product:
- Any functionality that must be tested with lots of variations

Mission:
- Execute lots of tests with varying parameters

Data-Driven Scripts: Example 2


Test  Bounding box width  Caption location  Caption typeface  Caption style  Caption graphic (CG)  CG format  CG size
1     3 pt.               Top               Times             Normal         Yes                   PCX        Large
2     2 pt.               Right             Arial             Italic         No                    TIFF       Medium
3     1 pt.               Left              Courier           Bold           No
4     none                Bottom            Helvetica         Bold Italic    Yes

Data-Driven Scripts: Test Strategy


Test Creation:
- Anyone can enter test parameters in a spreadsheet
- Tests may be automatically generated using spreadsheet formulas or external programs
- Live data may be used from legacy systems

Test Execution:
- Tests are executed by data-driven scripts

Test Evaluation:
- Specify expected results with the test inputs, or
- deliver input parameters with outputs to facilitate manual verification

Data-Driven Scripts: Components


Data-Driven Scripts:
- Test procedure scripts that execute in a Scripting Framework

Test Case Parameters:
- Stored in a spreadsheet

Bridge Code:
- A framework library that allows the test script to read the spreadsheet data
Data-Driven Scripts: Considerations


When might you use this?
- You intend to run lots of variations of a test procedure
- Domain specialists are defining tests but are not programmers
- You have automation specialists
- You would like to reduce dependency on a specific test tool

When might this be enough?
- Your tests vary in terms of data rather than procedure

Data-Driven Scripts: Quality Attributes


Maintainability: High
- The test procedure script can be adapted to interface changes without requiring changes to the data.

Reviewability: Medium/High
- Test data is easy to review.
- Test procedure scripts should be double-checked with known test data first.

Dependability: Medium
- Reviewability helps.
- The test procedure script must be reviewed to ensure that it is executing as expected.
- Supporting navigation options can complicate scripts and increase the chance of errors.

Reusability: High
- Test data may be usable with different test procedure scripts.

Screen-based Tables
Non-programmers want to create and automate lots of tests. They don't want to work through middlemen, and the screen designs are fixed in advance. Therefore, use tables to specify the windows, controls, actions and data for the tests.

Screen-based Tables: Example


Test Case  Window     Control   Method    Value
1000       MAINMENU   MAINMENU  SEL_MENU  Chart of Accounts
1000       MAINMENU   CHT_MENU  SEL_MENU  Enter Accounts
1000       CHT_ACCTS  ACCTNO    ENT_EDIT  100000
1000       CHT_ACCTS  ACCTNO    PRESSKEY  TAB
1000       CHT_ACCTS  ACCTDESC  ENT_EDIT  Current Assets
1000       CHT_ACCTS  ACCTDESC  PRESSKEY  TAB
1000       CHT_ACCTS  STMTTYPE  PUSH_RB   ON
1000       CHT_ACCTS  HEADER    CHECKBOX  ON
1000       CHT_ACCTS  ACCTTYPE  LB_ITEM   Assets
1000       CHT_ACCTS  OK        PUSH_PB   ON
1000       CHT_ACCTS  MESSAGE   LOOKTEXT  Account Added
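A sketch of the dispatcher idea in Python: each row names a window, control, method and value, and the dispatcher routes it to driver code. The gui driver object and its method names are illustrative assumptions, not a real tool's API.

import csv

def dispatch(gui, row):
    """Execute one test step described by a table row."""
    control = gui.window(row["Window"]).control(row["Control"])
    method, value = row["Method"], row["Value"]
    if method == "ENT_EDIT":
        control.set_text(value)
    elif method == "PRESSKEY":
        control.press_key(value)
    elif method == "SEL_MENU":
        control.select_menu(value)
    elif method == "LOOKTEXT":
        assert value in control.text(), f"expected text {value!r}"
    else:
        raise ValueError(f"unknown method {method}")

def run_table(gui, path):
    """One dispatcher serves every test case in every table."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            dispatch(gui, row)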

Screen-Based Tables: Context


People:
- Many nonprogramming testers
- Dedicated automation staff

Product:
- User interface defined early
- It won't change late

Mission:
- Test business logic from a user perspective
- Allow tests to be reviewed by anybody
- Avoid committing to a test tool

Screen-Based Tables: Test Strategy


Test Creation:
- User domain specialists specify tests, step by step, in spreadsheets
- Tests can be written as soon as the screen design is fixed

Test Execution:
- Tests are executed by a dispatcher script running in a framework

Test Evaluation:
- Expected results are also specified in the test case spreadsheets
Screen-Based Tables: Components


Test Tables:
- Screen-based descriptions stored in spreadsheets

Dispatcher:
- A test script that reads the test tables line by line and executes them

Bridge Code:
- Allows the dispatcher script to access the spreadsheet data

Window Maps:
- Define the names of the widgets on a screen and how they can be accessed

Window Maps
- An abstraction layer that improves maintainability
- Provides names and functions for conveniently accessing all controls or widgets
- Included in the better test tools (GUI Map, Window Declarations)
- Costs to generate depend on the tool
- Using window maps also greatly improves script readability


Window Map Examples


Two ways to map the Login and Password textboxes:
- Map A, by internal name: Login textbox -> USERNM, Password textbox -> PWD
- Map B, by index: Login textbox -> #1, Password textbox -> #2
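A window map sketch in Python: logical names on the left insulate tests from the locators on the right. The locator fields and the find_control call are illustrative assumptions.

LOGIN_WINDOW = {
    # logical name -> qualifiers a driver could use to find the control
    "Login":    {"intname": "USERNM", "index": 1},
    "Password": {"intname": "PWD",    "index": 2},
    "OK":       {"intname": "OKBTN",  "index": 3},
}

def locate(gui, window_map, logical_name):
    """Resolve a logical control name to a live control object."""
    qualifiers = window_map[logical_name]
    return gui.find_control(intname=qualifiers["intname"])

When developers rename or rearrange controls, only the map changes; the tests keep referring to "Login" and "Password".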

Screen-Based Tables: Quality Attributes


Maintainability: Medium
- Minor user interface changes can be handled using the window maps.

Reviewability: High
- Tests can be reviewed by almost anyone.

Dependability: High
- Error handling and logging are isolated in the execution system, which can be optimized for reliability.

Reusability: Low
- The test format does, however, facilitate replacing the GUI test tool if the need arises.

Action Keywords
You would like easy-to-read test scripts that can be created by business domain experts who may not be programmers. Therefore, define Action Keywords that can appear in spreadsheets yet correspond to user tasks.

Action Keywords: Example


Go To           AddressBook
New Address     Smith    John    1010 Main St    512-555-1212
Click On        Done
Click On        AddressName Smith
Verify Address  Smith    John    1010 Main St    512-555-1212
Change Address  SmithX   JohnX   1010 Main StX   512-555-xxxx
Click On        Done
Click On        AddressName SmithX
Verify Address  SmithX   JohnX   1010 Main StX   512-555-xxxx
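A sketch of a keyword dispatcher in Python: the first cell of each row selects a task-library function, and the remaining cells become its arguments. The task functions here are hypothetical stubs.

def go_to(screen): ...
def new_address(last, first, street, phone): ...
def click_on(widget, *args): ...
def verify_address(last, first, street, phone): ...
def change_address(last, first, street, phone): ...

TASK_LIBRARY = {
    "Go To": go_to,
    "New Address": new_address,
    "Click On": click_on,
    "Verify Address": verify_address,
    "Change Address": change_address,
}

def run_keyword_test(rows):
    """rows: list of cell lists, e.g. [["Go To", "AddressBook"], ...]"""
    for keyword, *args in rows:
        TASK_LIBRARY[keyword](*args)   # unknown keyword -> KeyError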

Action Keywords: Context


People:
- Business domain experts write the test scripts
- Automation experts write the task libraries and navigation code

Product:
- Typically used with GUI interfaces
- Can also be used effectively with other interfaces (e.g. telephone)

Mission:
- Support thorough automated testing by business domain experts
- Facilitate test review
- Write test scripts before the software is available for testing
- Create tests that will last

Action Keywords: Components


Keyword-based tests:
- Tests in spreadsheet format

Keyword definitions:
- The supported keywords and their arguments
- Mapped to task library functions

Task library:
- Functions that execute the tasks
- Written in the scripting language

Dispatcher:
- Parses the spreadsheet data and executes the corresponding library function

Bridge code:
- Allows the dispatcher script to read the spreadsheet data

Action Keywords: Test Strategy


Test Creation:
- Business domain specialists create tests in spreadsheets.
- Automation specialists create keywords and task libraries.
- Tests can be created early.

Test Execution:
- Tests are executed using a dispatcher and framework.

Test Evaluation:
- Expected results are defined as verification keywords when tests are authored.

Action Keywords
Sample architecture, from top to bottom:
- Test Cases
- Bridge Code
- Dispatch Control, Error Recovery, Task Library
- Window Maps, Support for Custom Controls
- User Interface Driver, Custom Testing Hooks
- User Interface Components

Action Keywords
When might you use this?
- Non-technical domain specialists
- Tests will be used for a long time
- You wish to write tests early
- You expect user interface changes

What are the risks?
- Complexity
- Cost
- Dispatchers and task libraries must be tested

Action Keywords: Quality Attributes


Maintainability: High
- Only the task libraries need to be updated when user interfaces change.

Reviewability: High
- The test format is easy to understand.

Dependability: Medium
- It really depends on how well the dispatcher and task functions are engineered.

Reusability: Medium
- Tasks can be reused for many tests.
Comparison of Spreadsheet-based Architectures


                        Data-Driven Scripts          Screen-Based Tables        Action Keywords
Rows in the test files  One per test case            One per step               One per action
Columns in the files    Semantics defined by each    Predefined as window,      Action word determines the
                        procedure script, all rows   object, action, value      semantics for the row
Control script          One per test procedure       General purpose; serves    General purpose; serves
                                                     many test procedures       many test procedures

Task Libraries
Writing good libraries is never simple. Two forces make test libraries a particular challenge:
- Test variations present ample opportunities for premature generalization.
- Test libraries are less likely to be tested themselves.

Task Libraries: Principles


Therefore, use test libraries as a way to make test scripts easier to understand, rather than as a way to reduce lines of code:
- Focus on grouping tasks in users' terms
- Document start and end states
- Only create functions that will be used dozens of times
- Write tests for your libraries

There is nothing wrong with open coding.

Task Libraries: Task Definition


- Tasks are common sequences of actions that appear repeatedly in tests.
- They may take place on a single screen or span a couple of screens, but they usually involve multiple widgets.
- They closely map to manual procedures in terms of detail, specificity and terminology.
- They must note and verify start and end states.
- Tasks may require more verification than typically appears in manual test descriptions.


Task Libraries: Examples


- CreateAccount (Name, Address, CreditLimit): creates a customer account and returns to the main screen.
- OrderItem (ProductID, Quantity): adds the specified product to the order sheet.
- CompleteOrder (CreditCard, Address): completes the existing order using the specified credit card and address.

Types of Library Functions


Task Libraries:
- Encapsulate common user tasks

Navigation Libraries:
- Facilitate navigating through the elements of the user interface

Verification Libraries:
- Check user interface elements against expected results
- Automatically log errors when appropriate

Test Harness Libraries:
- Support preconditions and postconditions
- The Error Recovery System is an example

Don't build test libraries simply to avoid repeating code.

Task Libraries: Contexts


When might you use them?
- The user interface is expected to change
- Lots of tests will be automated

What are the risks?
- Library design integrity must be adhered to
- Larger up-front costs
- Increased complexity

Levels of Testing: System Testing


System testing tests a complete software system from a user perspective. The most popular kind of system testing is GUI testing. (The levels of testing, from top to bottom: system testing, component testing, unit testing.)

Levels of Testing: Unit Testing


- Units are functions, methods or other lexical collections of code
- Units are written in a specific language
- Unit testing is usually done in the same language as the code being tested
- Unit testing is usually done by programmers, sometimes the same ones who wrote the code under test

Levels of Testing: API and Unit Testing Comparison


                    Unit Testing               API Testing
Public vs private   Typically private          Typically public
Test languages      Same language as product   Often a different language from the product
Effort to expose    Exposed automatically      APIs must be created
What's tested       Units large and small      Typically provided for large units only
Levels of Testing: Unit Integration Testing


What is a unit?
- A class, function or procedure
- Typically the work product of a single developer

Unit isolation testing tests each unit in isolation:
- Create stubs or simulators for the units being depended on
- Expensive and often difficult

Unit integration testing tests units in context:
- Allow calls to the classes, functions and components that the unit requires
- Typically cheaper and easier

Test-First Programming
You want to be sure that the code works as soon as it is written, and you want regression tests that will facilitate reworking code later without worry. Therefore, use Test-First Programming, a technique that uses testing to structure and motivate code development.
- Many programmers believe that Test-First Programming results in better design.

Test-First Programming: Context


People:
- Programmers use this technique when they develop code

Product:
- Typically used on products using iterative development methodologies

Mission:
- Test code before it is checked in
- All code must have automated tests

Test-First Programming: Test Strategy


Test Creation:
- Write a test for anything that could possibly fail
- Programmers create a test, write some code, then write another test
- Tests are written in the same language as the product code

Test Execution:
- Tests are executed using a unit testing framework

Test Evaluation:
- Expected results are specified when the tests are written

Test-First Programming: Components


Unit Test Framework:
- Several are in the public domain

Tests:
- Written in the same language as the product code
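A minimal test-first sketch using Python's standard unittest module. The function under test (add) is invented for illustration; in test-first style the tests exist and fail before the implementation is written.

import unittest

def add(a, b):
    # Simplest implementation that makes the tests below pass; in
    # test-first style this is written only after the tests failed.
    return a + b

class AddTests(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negatives(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()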

Test-First Programming: Quality Attributes


Maintainability: Medium
- Tests are maintained with the code.

Reviewability: Medium
- Tests can be reviewed by other programmers.

Dependability: Medium/High
- Tests are run before the code is written. This helps to test the tests.

Reusability: Low

API Tests
User interface tests are often tricky to automate and hard to maintain. Therefore, use or expose programming interfaces to access the functionality that needs to be tested.

API Tests: Example Interfaces


Interfaces may be provided specifically for testing:
- Excel
- Xconq

Existing interfaces may be able to support significant testing:
- InstallShield
- AutoCAD
- Interleaf
- Tivoli
- Any defined client/server interface

Any interface is easier to automate than a GUI.

API Tests: Context


People:
- Programmer/Testers write the tests.
- Good cooperation between testers and product programmers is needed.

Product:
- Any product with some kind of programming interface
- Any product that can have an interface added

Mission:
- Find an effective way to write powerful automated tests
- Testing starts early

API Tests: Uncovering APIs


Does your product have an API?
- Ask; it might be undocumented.
- Client/server protocol interfaces may be available.
- APIs or command-line interfaces may be available.
- Diagnostic interfaces may also be available.

API Tests: Building APIs


Request test interfaces.
- You may be surprised and get what you ask for.
- It may be cheaper to create or expose an interface for testing than to build GUI test automation infrastructure.

API Tests: Test Strategy


Test Creation:
- Tests are written in a scripting language.

Test Execution:
- Tests are executed in a scripting framework or using a programming language.

Test Evaluation:
- Expected results are specified when the tests are written.
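A sketch of API-level tests in Python against a hypothetical ordering interface. The orders API and its functions are assumptions standing in for whatever interface your product exposes; a harness would supply the real handle.

def test_order_total_includes_tax(orders):
    # Drive product functionality directly, no GUI involved.
    order = orders.create_order(customer="acme")
    orders.add_item(order, product_id="X100", quantity=2, unit_price=10.0)
    total = orders.total(order, tax_rate=0.10)
    assert total == 22.0, f"expected 22.0, got {total}"

def test_empty_order_total_is_zero(orders):
    order = orders.create_order(customer="acme")
    assert orders.total(order, tax_rate=0.10) == 0.0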

API Tests: Quality Attributes


Maintainability: High
- APIs tend to be more stable than GUIs.

Reviewability: Medium/High
- Tests are written in a standard scripting language.

Dependability: Medium
- The ability to write flawed tests is unimpeded.

Reusability: Low

Levels of Testing: Comparison of Approaches


                       Unit                      API                        GUI
Who can do it?         Developers                Programmer/users           Testers + automation experts
Tool support required  Unit test harnesses       Scripting test harnesses   GUI test tools
                       (public domain)           (public domain)            (purchase)
Training required      Tests are in the same     Need to understand         Requires tool training
                       language as the product   the API
Flexibility            Anyone can run            Anyone can run             Must have a tool license to run

Thin GUI
You want to test the user interface without using a GUI test tool. Therefore, design the GUI as a thin layer of presentation code atop the business logic. Use unit or API test techniques to test the business logic layer.
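A sketch of the separation in Python: the business logic lives in a plain class the tests can reach directly, and the presentation layer merely forwards events to it. All names are illustrative assumptions.

class OrderLogic:
    """Business logic: no GUI dependencies, directly unit-testable."""
    def __init__(self):
        self.items = []

    def add_item(self, price, quantity):
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self.items.append((price, quantity))

    def total(self):
        return sum(p * q for p, q in self.items)

# A thin presentation layer (hypothetical toolkit) would do no more than:
#   def on_add_clicked(self):
#       self.logic.add_item(float(self.price_field.text()),
#                           int(self.qty_field.text()))

def test_total():
    logic = OrderLogic()
    logic.add_item(10.0, 2)
    assert logic.total() == 20.0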

Comparing Two Kinds of Test Interfaces


The product under test consists of presentation code (behind the user interface) atop domain code (behind a programming interface). Test automation libraries can attach at either interface. Which interface will be easier to test?

Thin GUI: Context


People:
- Programmer/Testers

Product:
- The presentation layer is developed as a thin layer atop the business logic code.

Mission:
- Automate user interface tests using an existing unit test framework.

Thin GUI: Test Strategy


Test Creation:
- Tests are created as unit tests.

Test Execution:
- Tests are executed in the unit test framework.

Test Evaluation:
- Expected results are defined in the tests.

Thin GUI: Quality Attributes

Maintainability: High
- Splitting the GUI separates tests from interface changes. Technical issues, however, may interfere.

Reviewability: Medium
- Other programmers can review. No special tool language is used.

Dependability: Medium
- Note that some traditional testing of the GUI is still required.

Reusability: Medium
- The unit test framework is being reused.

Consult an Oracle
You want to evaluate lots of tests. Therefore, Consult an Oracle. An oracle is a reference program that computes correct results.


Consult an Oracle: Examples


- A calculator is tested by randomly generating inputs. The resulting outputs are compared to the results from Mathematica.
- Changes to a business system are tested by randomly generating inputs and then comparing the results to those from a previous version.
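A sketch of oracle-based checking in Python: random inputs go to both the system under test and a trusted reference, and results are compared within a tolerance. Both functions here are stand-ins for illustration.

import math
import random

def reference(x):
    return math.sqrt(x)     # stand-in for a trusted oracle

def system_under_test(x):
    return x ** 0.5         # stand-in for the product's computation

def test_against_oracle(trials=1000, tolerance=1e-9):
    for _ in range(trials):
        x = random.uniform(0.0, 1e6)
        expected = reference(x)
        actual = system_under_test(x)
        # Rules for acceptable accuracy belong here, not in the oracle.
        assert math.isclose(actual, expected, rel_tol=tolerance), (
            f"input {x}: oracle says {expected}, product says {actual}")

test_against_oracle()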

Consult an Oracle: Context


People:
- Whoever

Product:
- A suitable oracle must already exist

Mission:
- Test thoroughly

Consult an Oracle: Test Strategy


Test Creation:
- Typically large numbers of tests are generated randomly.

Test Execution:
- A testing framework sends the test inputs to both the system under test and the oracle.

Test Evaluation:
- The results are compared. Typically rules must be defined regarding the domain in which the oracle is considered authoritative and the degree of accuracy that is acceptable.

Consult an Oracle: Five Types of Oracles


- None: doesn't check the correctness of results; just makes sure the program doesn't crash.
- True: independent generation of results; often expensive.
- Consistency: compares results from different runs or versions; gold files.
- Self-verifying: inputs indicate the correct result; the correct result is computed when the data is generated.
- Heuristic: only checks some characteristics of the results; often very useful.

Consult an Oracle: Quality Attributes


Maintainability: High

Reviewability: High
- If you save the inputs and outputs, anyone can double-check them.

Dependability: Varies
- Mostly this depends on the dependability of the oracle.
- Don't forget to test your framework and accuracy calculations by seeding errors.

Reusability: High

Automated Monkey
Users will invariably try more input sequences than you could ever think up in the test lab. Interaction problems, in which a feature only fails after a previous action triggered a hidden fault, are hard to find. Therefore, develop an Automated Monkey: a state model of the product under test that can generate lots of test sequences.

Automated Monkey: Example


A small part of a state model of a web-based ordering system: Home, Add Account and Add Order states, with transitions among them. A test generated from this model steps from state to state, for example from Home to Add Account, back to Home, then to Add Order.

Automated Monkey: Example State Table


Start State            Transition                                       End State
AccountAdministration  AccountAdministration.CreateNewAccount.Click()   CreateAccountAdmin
AccountAdministration  AccountAdministration.AgentAccounts.Click()      Administration
AccountProfile         AccountProfile.ReturnHome.Click()                AgentHome
AccountProfile         AccountProfile.LogOut.Click()                    MainHome
AccountProfile         AccountProfile.Tasks_Bottom.Click()              Tasks
AccountProfile         AccountProfile.NewSolution_Bottom.Click()        NewSolution
AccountProfile         AccountProfile.Administration_Bottom.Click()     Administration
Accounts               Accounts.ReturnHome.Click()                      AgentHome
Accounts               Accounts.LogOut.Click()                          MainHome
Accounts               Accounts.Tasks_Bottom.Click()                    Tasks
Accounts               Accounts.NewSolution_Bottom.Click()              NewSolution
Accounts               Accounts.Administration_Bottom.Click()           Administration
Accounts               Accounts.Sites.Click()                           AccountSites
Accounts               Accounts.CreateNewAccountTop.Click()             CreateAccount
Accounts               Accounts.ShowAll1.Click()                        AccountsShowAll
AccountsEmpty          AccountsEmpty.ReturnHome.Click()                 AgentHome
AccountsEmpty          AccountsEmpty.LogOut.Click()                     MainHome

The beginning of a state table that lists all transitions.

Automated Monkey: Example Test


Start State     Transition                              End State
MainHome        MainHome.Login()                        AgentHome
AgentHome       AgentHome.NewSolution_Center.Click()    NewSolution
NewSolution     NewSolution.ReturnHome.Click()          AgentHome
AgentHome       AgentHome.Accounts_Center.Click()       Accounts
Accounts        Accounts.Sites.Click()                  AccountSites
AccountSites    AccountSites.NewSolution_Tab.Click()    NewSolution
NewSolution     NewSolution.Administration_Tab.Click()  Administration
Administration  Administration.Accounts_Tab.Click()     Accounts
Accounts        Accounts.Sites.Click()                  AccountSites
AccountSites    AccountSites.ReturnHome.Click()         AgentHome

A test generated from the state table, ready to be executed.

Automated Monkey: Test Strategy


Test Creation:
- Define a state model that corresponds to part of the product.
- Use algorithms to generate test paths through the state model.

Test Execution:
- Execute the test paths against the product. As a practical matter, this needs to be automated.

Test Evaluation:
- Verify that the product is in the correct state for each transition.

Automated Monkey: Architecture


State Model:
- A state model consists of nodes, which are the states, and edges, which are the transitions between states.

Generation Algorithm:
- The simplest algorithm picks a random transition from each node (a random walk).
- Mathematical graph theory provides several algorithms that can be used to ensure specific levels of coverage.

Test Paths:
- A test path is a chain of transitions that traverses the model. Each transition is an action and each node on the chain is a state.

Execution Engine:
- A script executes the test path. A verification method is executed for each node.
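A random-walk sketch in Python over a toy state model; the model and the execution hooks are illustrative assumptions, not a complete monkey.

import random

MODEL = {
    # state -> list of (action, end state)
    "Home":       [("open_add_account", "AddAccount"),
                   ("open_add_order", "AddOrder")],
    "AddAccount": [("save", "Home"), ("cancel", "Home")],
    "AddOrder":   [("submit", "Home"), ("cancel", "Home")],
}

def random_walk(start="Home", steps=10, seed=None):
    """Generate a test path: a list of (state, action, next_state)."""
    rng = random.Random(seed)
    path, state = [], start
    for _ in range(steps):
        action, next_state = rng.choice(MODEL[state])
        path.append((state, action, next_state))
        state = next_state
    return path

# An execution engine would replay each action against the product and
# verify the product really reached next_state after every transition.
for step in random_walk(seed=42):
    print(step)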

Automated Monkey: Context


People:
- Programmer/Testers with some mathematical skill

Product:
- Many

Mission:
- Thoroughly test functionality that must work correctly

Automated Monkey: Quality Attributes


Maintainability: Varies

Reviewability: Low/Medium
- State models may be hard to review.
- It helps if tests are generated in a reviewable form.

Dependability: Varies
- Depends on the state verification functions. (Instrumentation may be required.)

Reusability: Varies

Assertions and Diagnostics


Delayed-fuse bugs are hard to detect. These create bad data or invalid states, but it may take further testing before the problem becomes obvious. Therefore, add Assertions and Diagnostics to the product code.
- Assertions are statements of invariants: when one fails, you've found a bug.
- Diagnostics are warnings: further analysis is required.

Assertions and Diagnostics: Example


A digital PBX:
- Internal errors are logged (assertions).
- Diagnostic messages report events that are expected to happen infrequently.
- Tests are generated with a real-time simulator.
- Logs are inspected to find errors.

Assertions and Diagnostics: Context


People:
- Tester/Programmers
- Programmer/Testers

Product:
- Many. Many standard components have diagnostic interfaces built in.

Mission:
- Test software thoroughly.

Assertions and Diagnostics: More Examples


- Assertions: logical statements in the code that make assumptions explicit. If one is false, there must be a bug. Typically assertion checking is only done during testing and debugging.
- Database integrity checks: a program checks the referential integrity of a database, reporting any errors found.
- Code integrity checks: compute a checksum to see whether code has been overwritten.
- Memory integrity checks: modify memory allocation to make wild pointers more likely to cross application memory allocations and trigger memory faults.
- Fault insertion: allow error-handling code to be triggered without having to actually create the error conditions (e.g. bad media, disk full).
- Resource monitoring: allow configuration parameters, memory usage and other internal information to be viewed.
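A small Python sketch of the distinction: an assertion states an invariant, while a diagnostic merely logs something worth a later look. The account example is invented for illustration.

import logging

def withdraw(balance, amount):
    if amount > balance:
        raise ValueError("insufficient funds")   # ordinary error handling
    if amount > 0.9 * balance:
        # Diagnostic: legal but rare; log it for later analysis.
        logging.warning("withdrawal of %s is >90%% of balance %s",
                        amount, balance)
    new_balance = balance - amount
    # Assertion: an invariant of the design; if it fails, we found a bug.
    assert new_balance >= 0, "invariant violated: negative balance"
    return new_balance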

Assertions and Diagnostics: Test Strategy


Test Creation:
- Varies. Tests can be created using an Automated Monkey. Manual exploratory testing is also supported.

Test Execution:
- Varies. May require a debug version of the software.

Test Evaluation:
- Assertions report that errors occurred.
- Diagnostics must be analyzed, either to help debug assertion errors or to suggest problems lying in wait.

Assertions and Diagnostics: Quality Attributes


Maintainability: Medium/High
- Assertions and diagnostics should be revised as the code is changed.

Reviewability: Medium
- They make the code execution easier to understand.
- Diagnostics may help provide information regarding test coverage.

Dependability: Medium
- Depends on how well the assertions and diagnostics have been added.

Reusability: High
- Any test can use them.

Quick and Dirty


You want results fast and you have no time for architecture. Therefore, just do what you can:
- Focus on smoke tests, configuration tests, tests of variations, and endurance tests.
- Plan to throw away code.
- In the process, learn about your tools and the possibilities for automation.

"Plan to throw one away; you will, anyhow." (Fred Brooks)

Quick and Dirty: Opportunities


- Platform setup and reset
- Pre-load database with testbed data
- Smoke tests
- Regression testing
- Configuration and multiplatform testing
- Load testing
- Randomized testing
- Code coverage measurement
- Memory leak and other specialized testing
- Test compliance with interface standards
- Collect performance metrics
- Confirm pre-defined release criteria

Quick and Dirty: Context


People:
- Tester/Programmers

Product:
- Any

Mission:
- Automate tests that will pay back quickly for the time invested in creating them.

Quick and Dirty: Test Strategy


Test Creation:
- Hand coding
- Capture replay (if it works)

Test Execution:
- Nothing fancy

Test Evaluation:
- Yes! Manual verification of results is OK.

Quick and Dirty: Quality Attributes


Maintainability: Low/None

Reviewability: Low

Dependability: Varies
- You're really depending on the people who create and run the tests.

Reusability: Low

Architecture Patterns, grouped:
- Keeping it Simple: Scripting Frameworks
- User Interface Abstraction: Data-Driven Scripts, Screen-Based Tables, Action Keywords
- Alternate Interfaces: Test-First Programming, API Tests, Thin GUI
- Verdict Focus: Consult an Oracle, Automated Monkey, Assertions and Diagnostics
- Keeping it Simple (Stupid): Quick and Dirty

Are You Ready to Automate?


Surveying Test Automation Objectives
Close your workbooks

Introduction Quality Attributes Architectural Patterns Are You Ready to Automate? Concluding Themes


Top 10 Reasons for Automating Tests


1. Manual testing sucks.
2. Tool vendor said "capture replay works."
3. Fun to watch the dialogs popping on the screen.
4. Afterwards we can fire all those pesky testers.
5. Everybody else is doing it.
6. No humans were harmed in the testing of this software.
7. Big bucks already spent on the test tool.
8. Looks good on the resume.
9. No Testing for Dummies book ... yet.
10. Keep the intern busy.

Gradual Test Automation


Test automation benefits from a gradual approach. Build some tests and see how they run before adding complexity. This seminar has presented architectures in a way that allows gradual adoption.

Pilot Project
- Validate your tools and approach
- Demonstrate that your investment in automation is well spent
- Quickly automate some real tests
- Get a trial license for any test tools
- Scale your automation project in steps

Perspectives Differ
Roles:
- Development manager
- Testing manager
- Developers
- Testers
- Automators

Reasons for automating (detailed on the next slides):
- Speed up testing
- Allow more frequent testing
- Reduce manual labor costs
- Improve test coverage
- Ensure consistency
- Simplify testing
- Define the testing process
- Make testing more interesting and challenging
- Develop programming skills
- Justify the cost of the tools
- "Of course we'll have automation!"

Marick: How should developers think about testing?

Reasons for Automating


- Speed up testing
- Allow more frequent testing
- Reduce manual labor costs
- Improve test coverage
- Ensure consistency
- Simplify testing
- Just want testing to go away

Reasons for Automating (cont.)


- Define the testing process
- Make testing more interesting and challenging
- Develop programming skills
- Justify the cost of the tools
- "Of course we'll have automation!"

Reasons for Automating: Summary


Reasonable:
- High reuse
- Speed development
- Expand reach
- Smooth development

Unreasonable:
- Simplify testing
- Force organization
- 100% automation
- Justify tool purchase

Mixed bag:
- Regression testing
- Build skill & morale

Success Criteria

What are your success criteria?
- The automation runs
- The automation does real testing
- The automation finds defects
- The automation saves time

What bugs aren't you finding while you are working on the automation? What is the goal of testing?

Ready to Automate?
1. Is "automation" or "testing" a label for other problems?
2. Are testers trying to use automation to prove their prowess?
3. Can testability features be added to the product code?
4. Do testers and developers work cooperatively and with mutual respect?
5. Is automation developed on an iterative basis?
6. Have you defined the requirements and success criteria for automation?
7. Are you open to different concepts of what test automation can mean?
8. Is test automation led by someone with an understanding of both programming and testing?


Ready to Automate? Scoring


Score       Verdict
90-100      Ready to Automate
80-85       Win Over Some Converts
70-75       Time for More Training
60-65       Wait and See
55 or less  Nevermind

Concluding Themes
What Have We Learned?

Introduction Quality Attributes Architectural Patterns Are You Ready to Automate? Concluding Themes


Keep It Simple
- Test automation tends to complicate testing.
- The test suite itself will need to be tested.
- Make sure the test suite meets the original goals.

Build Flexibly and Incrementally


- Build and deliver test automation in stages.
- Deliver a proof of concept early.
- Deliver automation updates regularly.
- Package the automation so that it can be installed easily.
- Document all dependencies.

Work With Development


- Test automation is development.
- Get help from your development experts (developers).
- Incorporate automation milestones into the development plan.

Use Multiple Approaches


- It is better to use several approaches halfway than to use one perfectly.
- Test expertise and manual testing are still required.
- Manual testing is a sanity test for the automation.
- There will always be a need for exploratory, non-repeated testing.
- Diverse half-measures.

Keep Tests Visible


- Visibility facilitates review.
- Review encourages realism about test coverage.
- Test suites require review and improvement.
- Don't assume that old features are covered by existing tests.
- Assess test suite weaknesses and product risks. Are these tests still useful?

Commitment Is Essential
It is easy for test automation to be designated as a side project, and then it won't get the resources it needs. Commitment ensures that test automation gets the resources, cooperation and attention that it needs:
- from development
- from management

Get an Early Start


- An early start makes it more likely you can improve testability and get test APIs.
- Your test strategy will include automation from the start.

Activity: Next Steps


What are the next things for your project?


Resources
Books, Websites, Consultation


Test Automation Books


- Software Test Automation, Fewster & Graham (1999). The first section provides a general overview of test automation with a description of common practices; it describes Scripting Frameworks, Data-Driven Scripts, Screen-Based Tables and Action Keywords. The second section collects accounts from various automators describing their projects.
- Integrated Test Design and Automation, Buwalda et al. (2002). A detailed elaboration of Action Keywords.
- The Automated Testing Handbook, Linda Hayes (1995). A concise description of Screen-Based Tables.
- Visual Test 6 Bible, Thomas Arnold (1999). Contains a chapter on the Automated Monkey by Noel Nyman.
- Visual Basic for Testers, Mary Sweeney (2001). Describes API-based testing.

Software Testing Books


- Lessons Learned in Software Testing, Kaner, Bach & Pettichord (2001). The chapter on test automation has been described as more useful than any of the books on test automation.
- Testing Applications on the Web, Nguyen (2001). Understand how to customize your testing strategy based on the architecture of your system.
- Testing Computer Software, Kaner, Falk & Nguyen (1993). Describes how to test effectively when programmers don't follow the rules.
- How to Break Software, Whittaker (2003). Details 23 attacks for uncovering common bugs, including fault-insertion techniques.

Websites
- Reference Point: Test Automation, Pettichord (Readings). Identifies articles, books and websites.
- Software Testing Hotlist, Pettichord, testinghotlist.com (Readings). Articles and websites for software testing and test automation.
- QA Forums, qaforums.com. A good place to get current tool-specific information; has boards for all the popular test tools.

Free Consultation
As a student in this seminar you are entitled to a free consultation:
- One hour
- By phone or email
- Face to face if I'm already in town

Send me an email describing your situation. Remind me that you attended this seminar. If you want to talk on the phone, let me know good times when you can be reached.

Contact: bret@pettichord.com, 512-302-3251

