
Principles of Software Testing

Part I (Satzinger, Jackson, and Burd)

Testing
- Testing is a process of identifying defects.
- Develop test cases and test data.
- A test case is a formal description of:
  - A starting state
  - One or more events to which the software must respond
  - The expected response or ending state
- Test data is a set of starting states and events used to test a module, a group of modules, or an entire system.

Testing discipline activities

Test types and detected defects

Unit Testing
- The process of testing individual methods, classes, or components before they are integrated with other software.
- Two methods for isolated testing of units (see the sketch below):
  - Driver: simulates the behavior of a method that sends a message to the method being tested.
  - Stub: simulates the behavior of a method that has not yet been written.
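As a sketch of both ideas, here is a minimal Python unit test in which the test class acts as the driver and unittest.mock stands in as the stub; calculate_total and the tax service are hypothetical examples, not part of the course material:

```python
import unittest
from unittest.mock import Mock

def calculate_total(price, tax_service):
    # Method under test; it depends on a tax service that may not be written yet.
    return round(price * (1 + tax_service.get_rate()), 2)

class CalculateTotalDriver(unittest.TestCase):
    """Driver: simulates a caller by sending messages to the method under test."""

    def test_total_with_stubbed_tax_rate(self):
        # Stub: stands in for the unwritten tax service, returning a canned rate.
        tax_stub = Mock()
        tax_stub.get_rate.return_value = 0.08
        self.assertEqual(calculate_total(100.00, tax_stub), 108.00)

if __name__ == "__main__":
    unittest.main()
```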

Integration Testing
Evaluates the behavior of a group of methods or classes
Identifies interface compatibility, unexpected parameter values or state interaction, and runtime exceptions

System test
Integration test of the behavior of an entire system or independent subsystem

Build and smoke test


System test performed daily or several times a week

Usability Testing
Determines whether a method, class, subsystem, or system meets user requirements.

Performance Testing
Determines whether a system or subsystem can meet time-based performance criteria (see the sketch below):
- Response time specifies the desired or maximum allowable time limit for software responses to queries and updates.
- Throughput specifies the desired or minimum number of queries and transactions that must be processed per minute or hour.
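A minimal sketch of how such criteria might be checked automatically; handle_query and both thresholds below are illustrative assumptions:

```python
import time

def handle_query(q):
    # Hypothetical system under test; replace with a real query call.
    return q.upper()

def test_response_time_and_throughput(max_response_s=0.1, min_per_minute=600):
    start = time.perf_counter()
    count = 0
    while time.perf_counter() - start < 1.0:       # sample for one second
        t0 = time.perf_counter()
        handle_query("SELECT *")
        # Response time: each call must finish within the allowable limit.
        assert time.perf_counter() - t0 <= max_response_s, "response too slow"
        count += 1
    # Throughput: extrapolate the one-second sample to a per-minute rate.
    assert count * 60 >= min_per_minute, f"throughput too low: {count * 60}/min"

test_response_time_and_throughput()
```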

User Acceptance Testing


Determines whether the system fulfills user requirements
Involves the end users

Acceptance testing is a very formal activity in most development projects

Who Tests Software?


Programmers
- Unit testing
- Testing buddies can test other programmers' code

Users
- Usability and acceptance testing
- Volunteers are frequently used to test beta versions

Quality assurance personnel


- All testing types except unit and acceptance
- Develop test plans and identify needed changes

Part II

Principles of Software Testing for Testers


Module 0: About This Course

Course Objectives
After completing this course, you will be a more knowledgeable software tester. You will be better able to:
- Understand and describe the basic concepts of functional (black box) software testing.
- Identify a number of test styles and techniques and assess their usefulness in your context.
- Understand the basic application of techniques used to identify useful ideas for tests.
- Help determine the mission and communicate the status of your testing with the rest of your project team.
- Characterize a good bug report, peer-review the reports of your colleagues, and improve your own report writing.
- Understand where key testing concepts apply within the context of the Rational Unified Process.

Course Outline
0 About This Course
1 Software Engineering Practices
2 Core Concepts of Software Testing
3 The RUP Testing Discipline
4 Define Evaluation Mission
5 Test and Evaluate
6 Analyze Test Failure
7 Achieve Acceptable Mission
8 The RUP Workflow As Context

Principles of Software Testing for Testers


Module 1: Software Engineering Practices
(Some things Testers should know about them)

Objectives
- Identify some common software development problems.
- Identify six software engineering practices for addressing common software development problems.
- Discuss how a software engineering process provides supporting context for software engineering practices.

Symptoms of Software Development Problems


- User or business needs not met
- Requirements churn
- Modules don't integrate
- Hard to maintain
- Late discovery of flaws
- Poor quality or poor user experience
- Poor performance under load
- No coordinated team effort
- Build-and-release issues

Trace Symptoms to Root Causes

Each symptom traces back to a root cause, and each root cause is addressed by one of the software engineering practices.

Symptoms:
- Needs not met
- Requirements churn
- Modules don't fit
- Hard to maintain
- Late discovery
- Poor quality
- Poor performance
- Colliding developers
- Build-and-release issues

Root causes:
- Incorrect requirements
- Ambiguous communications
- Brittle architectures
- Overwhelming complexity
- Undetected inconsistencies
- Insufficient testing
- Subjective assessment
- Waterfall development
- Uncontrolled change
- Insufficient automation

Software engineering practices:
- Develop Iteratively
- Manage Requirements
- Use Component Architectures
- Model Visually (UML)
- Continuously Verify Quality
- Manage Change

Software Engineering Practices Reinforce Each Other

The six practices (Develop Iteratively, Manage Requirements, Use Component Architectures, Model Visually (UML), Continuously Verify Quality, Manage Change) support one another. For example, developing iteratively:
- Ensures users are involved as requirements evolve
- Validates architectural decisions early on
- Addresses the complexity of design/implementation incrementally
- Measures quality early and often
- Evolves baselines incrementally

Principles of Software Testing for Testers


Module 2: Core Concepts of Software Testing

Objectives
- Introduce foundation topics of functional testing
- Provide stakeholder-centric visions of quality and defect
- Explain test ideas
- Introduce test matrices

Module 2 Content Outline


Definitions
- Defining functional testing
- Definitions of quality
- A pragmatic definition of defect
- Dimensions of quality

Test ideas
- Test-idea catalogs
- Test matrices

Functional Testing
In this course, we adopt a common, broad, current meaning for functional testing. It is:
- Black box
- Interested in any externally visible or measurable attributes of the software other than performance

In functional testing, we think of the program as a collection of functions: we test it in terms of its inputs and outputs.

How Some Experts Have Defined Quality


- "Fitness for use" (Dr. Joseph M. Juran)
- "The totality of features and characteristics of a product that bear on its ability to satisfy a given need" (American Society for Quality)
- "Conformance with requirements" (Philip Crosby)
- "The total composite product and service characteristics of marketing, engineering, manufacturing and maintenance through which the product and service in use will meet expectations of the customer" (Armand V. Feigenbaum)

Note the absence of "conforms to specifications."

Quality As Satisfiers and Dissatisfiers


Joseph Juran distinguishes between customer satisfiers and dissatisfiers as key dimensions of quality:

Customer satisfiers
- The right features
- Adequate instruction

Dissatisfiers
- Unreliable
- Hard to use
- Too slow
- Incompatible with the customer's equipment

A Working Definition of Quality


"Quality is value to some person."
-- Gerald M. Weinberg

Change Requests and Quality


A defect, in the eyes of a project stakeholder, can include anything about the program that causes it to have lower value.

It's appropriate to report any aspect of the software that, in your opinion (or in the opinion of a stakeholder whose interests you advocate), causes the program to have lower value.

Dimensions of Quality: FURPS


Functionality
e.g., test the accurate workings of each usage scenario.

Usability
e.g., test the application from the perspective of convenience to the end user.

Reliability
e.g., test that the application behaves consistently and predictably.

Performance
e.g., test online response under average and peak loading.

Supportability
e.g., test the ability to maintain and support the application under production use.

A Broader Definition of Dimensions of Quality


Accessibility, capability, compatibility, concurrency, conformance to standards, efficiency, installability and uninstallability, localizability, maintainability, performance, portability, reliability, scalability, security, supportability, testability, usability.

Collectively, these are often called qualities of service, nonfunctional requirements, attributes, or simply "the -ilities."

Test Ideas
A test idea is a brief statement that identifies a test that might be useful. A test idea differs from a test case in that the test idea contains no specification of the test's workings, only the essence of the idea behind the test. Test ideas are generators for test cases: potential test cases are derived from a test-ideas list. A key question for the tester or test analyst is which test ideas are worth trying.

Exercise 2.3: Brainstorm Test Ideas (1/2)


We're about to brainstorm, so let's review the ground rules for brainstorming:
- The goal is to get lots of ideas. You brainstorm together to discover categories of possible tests: good ideas that you can refine later. There are more great ideas out there than you think.
- Don't criticize others' contributions. Jokes are OK, and are often valuable.
- Work later, alone or in a much smaller group, to eliminate redundancy, cut bad ideas, and refine and optimize the specific tests.
- Often, these meetings have a facilitator (who runs the meeting) and a recorder (who writes good stuff onto flipcharts). These two keep their opinions to themselves.

Exercise 2.3: Brainstorm Test Ideas (2/2)


A field can accept integer values between 20 and 50. What tests should you try?

A Test Ideas List for Integer-Input Tests


Common answers to the exercise would include:

Test          Why it's interesting              Expected result
20            Smallest valid value              Accepts it
19            Smallest - 1                      Reject, error message
0             0 is always interesting           Reject, error message
Blank         Empty field; what does it do?     Reject? Ignore?
49            Valid value                       Accepts it
50            Largest valid value               Accepts it
51            Largest + 1                       Reject, error message
-1            Negative number                   Reject, error message
4294967296    2^32; integer overflow?           Reject, error message
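The table above translates almost directly into an automated check. A sketch in Python, assuming a hypothetical accepts() validator for the 20-50 field:

```python
def accepts(raw):
    # Hypothetical validator for a field that accepts integers 20..50.
    try:
        return 20 <= int(raw) <= 50
    except (TypeError, ValueError):
        return False

# (input, expected) pairs taken straight from the test-ideas table above;
# blank and 0 are treated as rejects, per the expected-results column.
cases = [("20", True), ("19", False), ("0", False), ("", False),
         ("49", True), ("50", True), ("51", False), ("-1", False),
         ("4294967296", False)]

for raw, expected in cases:
    assert accepts(raw) == expected, f"{raw!r}: expected {expected}"
```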

Discussion 2.4: Where Do Test Ideas Come From?


Where would you derive Test Ideas Lists?
- Models
- Specifications
- Customer complaints
- Brainstorming sessions among colleagues

A Catalog of Test Ideas for Integer-Input tests


- Nothing
- Valid value
- At LB of value
- At UB of value
- At LB of value - 1
- At UB of value + 1
- Outside of LB of value
- Outside of UB of value
- 0
- Negative
- At LB number of digits or chars
- At UB number of digits or chars
- Empty field (clear the default value)
- Outside of UB number of digits or chars
- Non-digits
- Wrong data type (e.g., decimal into integer)
- Expressions
- Space
- Non-printing char (e.g., Ctrl+char)
- DOS filename reserved chars (e.g., "\ * . :")
- Upper ASCII (128-254)
- Upper case chars
- Lower case chars
- Modifiers (e.g., Ctrl, Alt, Shift-Ctrl, etc.)
- Function key (F2, F3, F4, etc.)

The Test-Ideas Catalog


A test-ideas catalog is a list of related test ideas that are usable under many circumstances.
For example, the test ideas for numeric input fields can be catalogued together and used for any numeric input field.

In many situations, these catalogs are sufficient test documentation. That is, an experienced tester can often proceed with testing directly from these without creating documented test cases.

Apply a Test Ideas Catalog Using a Test Matrix

(Matrix: rows are catalog test ideas; one column per field name, with cells checked off as each idea is applied to each field.)
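A sketch of how a catalog might be expanded into such a matrix programmatically; the catalog slice and field names below are illustrative, not prescribed by the course:

```python
# Rows come from the test-ideas catalog; columns are the fields under test.
catalog = ["Nothing", "At LB of value", "At UB of value",
           "At LB - 1", "At UB + 1", "0", "Negative", "Non-digits"]
fields = ["quantity", "age", "discount_pct"]   # hypothetical field names

# Build an empty matrix, then mark cells as the ideas are applied.
matrix = {idea: {field: " " for field in fields} for idea in catalog}
matrix["At UB + 1"]["quantity"] = "X"

# Print in the rows-by-columns layout of the slide above.
print("test idea".ljust(16) + "".join(f.ljust(14) for f in fields))
for idea, row in matrix.items():
    print(idea.ljust(16) + "".join(row[f].ljust(14) for f in fields))
```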

Review: Core Concepts of Software Testing


- What is Quality? Who are the Stakeholders?
- What is a Defect?
- What are Dimensions of Quality?
- What are Test Ideas? Where are Test Ideas useful? Give some examples of Test Ideas.
- Explain how a catalog of Test Ideas could be applied to a Test Matrix.

Principles of Software Testing for Testers


Module 4: Define Evaluation Mission

So? Purpose of Testing?


The typical testing group has two key priorities:
- Find the bugs (preferably in priority order).
- Assess the condition of the whole product (as a user will see it).

Sometimes these conflict. The mission of assessment is the underlying reason for testing, from management's viewpoint. But if you aren't hammering hard on the program, you can miss key risks.

Missions of Test Groups Can Vary


- Find defects
- Maximize bug count
- Block premature product releases
- Help managers make ship / no-ship decisions
- Assess quality
- Minimize technical support costs
- Conform to regulations
- Minimize safety-related lawsuit risk
- Assess conformance to specification
- Find safe scenarios for use of the product (find ways to get it to work, in spite of the bugs)
- Verify correctness of the product
- Assure quality

A Different Take on Mission: Public vs. Private Bugs


A programmer's public bug rate includes all bugs left in the code at check-in. A programmer's private bug rate includes all the bugs that are produced, including the ones fixed before check-in. Estimates of private bug rates have ranged from 15 to 150 bugs per 100 statements. What does this tell us about our task?

Defining the Test Approach


The test approach (or testing strategy) specifies the techniques that will be used to accomplish the test mission. It also specifies how the techniques will be used. A good test approach is:
- Diversified
- Risk-focused
- Product-specific
- Practical
- Defensible

Heuristics for Evaluating Testing Approach


James Bach collected a series of heuristics for evaluating your test approach. For example, he says:
"Testing should be optimized to find important problems fast, rather than attempting to find all problems with equal urgency."

Please note that these are heuristics: they won't always be the best choice for your context. But in different contexts, you'll find different ones very useful.

What Test Documentation Should You Use?


Test planning standards and templates
- Examples
- Some benefits and costs of using IEEE 829 standard-based templates
- When are these appropriate?

Thinking about your requirements for test documentation
- Requirements considerations
- Questions to elicit information about test documentation requirements for your project

Write a Purpose Statement for Test Documentation


Try to describe your core documentation requirements in one sentence that doesn't have more than three components. Examples:
- The test documentation set will primarily support our efforts to find bugs in this version, to delegate work, and to track status.
- The test documentation set will support ongoing product and test maintenance over at least 10 years, will provide training material for new group members, and will create archives suitable for regulatory or litigation use.

Review: Define Evaluation Mission


- What is a Test Mission? What is your Test Mission?
- What makes a good Test Approach (Test Strategy)?
- What is a Test Documentation Mission? What is your Test Documentation Goal?

Principles of Software Testing for Testers


Module 5: Test & Evaluate

Test and Evaluate Part One: Test


In this module, we drill into Test and Evaluate. This addresses the "How?" question:
- How will you test those things?

Test and Evaluate Part One: Test


- This module focuses on the activity Implement Test.
- Earlier, we covered Test-Idea Lists, which are input here.
- In the next module, we'll cover Analyze Test Failures, the second half of Test and Evaluate.

Review: Defining the Test Approach


- In Module 4, we covered Test Approach.
- A good test approach is: diversified, risk-focused, product-specific, practical, defensible.
- The techniques you apply should follow your test approach.

Discussion Exercise 5.1: Test Techniques


There are as many as 200 published testing techniques. Many of the ideas overlap, but there are common themes. Similar-sounding terms often mean different things, e.g.:
- User testing
- Usability testing
- User interface testing

What are the differences among these techniques?

Dimensions of Test Techniques


Think of the testing you do in terms of five dimensions:
- Testers: who does the testing.
- Coverage: what gets tested.
- Potential problems: why you're testing (what risk you're testing for).
- Activities: how you test.
- Evaluation: how to tell whether the test passed or failed.

Test techniques often focus on one or two of these, leaving the rest to the skill and imagination of the tester.

Test Techniques: Dominant Test Approaches


Of the 200+ published functional testing techniques, there are ten basic themes. They capture the techniques in actual practice. In this course, we call them:
- Function testing
- Equivalence analysis
- Specification-based testing
- Risk-based testing
- Stress testing
- Regression testing
- Exploratory testing
- User testing
- Scenario testing
- Stochastic or random testing

So Which Technique Is the Best?


- Each technique has strengths and weaknesses.
- Think in terms of complement: there is no one true way.
- Mixing techniques can improve coverage.

(Diagram: techniques A through H arranged around the five dimensions: testers, coverage, potential problems, activities, evaluation.)

Apply Techniques According to the Lifecycle


- Test approach changes over the project.
- Some techniques work well in early phases; others in later ones.
- Align the techniques to iteration objectives.

Phases: Inception, Elaboration, Construction, Transition.

Early iterations: a limited set of focused tests; a few components of software under test; a simple test environment; focus on architectural and requirement risks.

Late iterations: many varied tests; a large system under test; a complex test environment; focus on deployment risks.

Module 5 Agenda
- Overview of the workflow: Test and Evaluate
- Defining test techniques
- Individual techniques:
  - Function testing
  - Equivalence analysis
  - Specification-based testing
  - Risk-based testing
  - Stress testing
  - Regression testing
  - Exploratory testing
  - User testing
  - Scenario testing
  - Stochastic or random testing

Using techniques together

At a Glance: Function Testing


Tag line: Black box unit testing
Objective: Test each function thoroughly, one at a time
Testers: Any
Coverage: Each function and user-visible variable
Potential problems: A function does not work in isolation
Activities: Whatever works
Evaluation: Whatever works
Complexity: Simple
Harshness: Varies
SUT readiness: Any stage

Strengths & Weaknesses: Function Testing


Representative cases
- Spreadsheet: test each item in isolation.
- Database: test each report in isolation.

Strengths
- Thorough analysis of each item tested
- Easy to do as each function is implemented

Blind spots
- Misses interactions
- Misses exploration of the benefits offered by the program

At a Glance: Equivalence Analysis (1/2)


Tag line: Partitioning, boundary analysis, domain testing
Objective: There are too many test cases to run. Use a stratified sampling strategy to select a few test cases from a huge population.
Testers: Any
Coverage: All data fields, and simple combinations of data fields. Data fields include input, output, and (to the extent they can be made visible to the tester) internal and configuration variables.
Potential problems: Data, configuration, error handling

At a Glance: Equivalence Analysis (2/2)


Activities: Divide the set of possible values of a field into subsets, and pick values to represent each subset. Typical values will be at boundaries. More generally, the goal is to find a best representative for each subset, and to run tests with these representatives. Advanced approach: combine tests of several best representatives; there are several approaches to choosing an optimal small set of combinations.
Evaluation: Determined by the data
Complexity: Simple
Harshness: Designed to discover harsh single-variable tests and harsh combinations of a few variables
SUT readiness: Any stage
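A sketch of the core activity, reusing the 20-50 integer field from Module 2; the partitions and the representative rule here are illustrative choices, not the only defensible ones:

```python
# Partition the possible values of a field that accepts integers 20..50.
partitions = {
    "below range": range(-2**31, 20),   # invalid subset
    "valid":       range(20, 51),       # valid subset
    "above range": range(51, 2**31),    # invalid subset
}

def representatives(subset):
    # Best representatives: the subset's boundaries, plus one interior value.
    lo, hi = subset[0], subset[-1]
    return sorted({lo, hi, (lo + hi) // 2})

for name, subset in partitions.items():
    print(name, representatives(subset))
```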

Strengths & Weaknesses: Equivalence Analysis


Representative cases
- Equivalence analysis of a simple numeric field.
- Printer compatibility testing (a multidimensional variable that doesn't map to a simple numeric field, but stratified sampling is essential).

Strengths
- Finds the highest-probability errors with a relatively small set of tests.
- An intuitively clear approach that generalizes well.

Blind spots
- Errors that are not at boundaries or in obvious special cases.
- The actual sets of possible values are often unknowable.

Optional Exercise 5.2: GUI Equivalence Analysis


Pick an application that you know and some dialogs
- e.g., MS Word and its Print, Page Setup, and Font Format dialogs

Select a dialog. Identify each field, and for each field:
- What is the type of the field (integer, real, string, ...)?
- List the range of entries that are valid for the field.
- Partition the field and identify boundary conditions.
- List the entries that are almost too extreme and too extreme for the field.
- List a few test cases for the field and explain why the values you chose are the most powerful representatives of their sets (for showing a bug).
- Identify any constraints imposed on this field by other fields.

At a Glance: Specification-Based Testing


Tag line: Verify every claim
Objective: Check conformance with every statement in every spec, requirements document, etc.
Testers: Any
Coverage: Documented requirements, features, etc.
Potential problems: Mismatch of implementation to spec
Activities: Write and execute tests based on the specs; review and manage docs and traceability
Evaluation: Does behavior match the spec?
Complexity: Depends on the spec
Harshness: Depends on the spec
SUT readiness: As soon as modules are available

Strengths & Weaknesses: Spec-Based Testing


Representative cases
- Traceability matrix: tracks the test cases associated with each specification item.
- User documentation testing.

Strengths
- Critical defense against warranty claims, fraud charges, and loss of credibility with customers.
- Effective for managing the scope and expectations of regulatory-driven testing.
- Reduces support costs and customer complaints by ensuring that no false or misleading representations are made to customers.

Blind spots
- Any issues not in the specs, or treated badly in the specs/documentation.

Traceability Tool for Specification-Based Testing


The Traceability Matrix maps test cases to specification statements: one column per statement (Stmt 1 through Stmt 5), one row per test case (Test 1 through Test 6), with an X wherever a test covers a statement.
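Such a matrix is easy to keep as a simple data structure that can also report coverage gaps. A sketch (the X placements below are illustrative):

```python
# Which specification statements each test case covers (the X marks).
coverage = {
    "Test 1": {"Stmt 1", "Stmt 3"},
    "Test 2": {"Stmt 2"},
    "Test 3": {"Stmt 3", "Stmt 4"},
}
statements = {"Stmt 1", "Stmt 2", "Stmt 3", "Stmt 4", "Stmt 5"}

covered = set().union(*coverage.values())
print("Uncovered statements:", sorted(statements - covered))
# -> ['Stmt 5']: no test traces to it, so add one or flag the gap.
```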

Optional Exercise 5.5: What Specs Can You Use?


Challenge:
- Getting information in the absence of a spec
- What substitutes are available?

Example:
- The user manual: think of this as a commercial warranty for what your product does.

What other specs can you, or should you, be using to test?

Exercise 5.5: Specification-Based Testing


Here are some ideas for sources that you can consult when specifications are incomplete or incorrect:
- Software change memos that come with new builds of the program
- User manual draft (and previous versions of the manual)
- Product literature
- Published style guides and UI standards

Definitions: Risk-Based Testing
Three key meanings:
1. Find errors (a risk-based approach to the technical tasks of testing)
2. Manage the process of finding errors (risk-based test management)
3. Manage the testing project and the risk posed by (and to) testing in its relationship to the overall project (risk-based project management)

We'll look primarily at risk-based testing (#1), proceeding later to risk-based test management. The project management risks are very important, but out of scope for this class.

At a Glance: Risk-Based Testing


Tag line: Find big bugs first
Objective: Define, prioritize, and refine tests in terms of the relative risk of issues we could test for
Testers: Any
Coverage: By identified risk
Potential problems: Identifiable risks
Activities: Use qualities of service, risk heuristics, and bug patterns to identify risks
Evaluation: Varies
Complexity: Any
Harshness: Harsh
SUT readiness: Any stage

Strengths & Weaknesses: Risk-Based Testing


Representative cases
- Equivalence class analysis, reformulated.
- Testing in order of frequency of use.
- Stress tests, error handling tests, security tests.
- Sampling from a predicted-bugs list.

Strengths
- Optimal prioritization (if we get the risk list right).
- High-power tests.

Blind spots
- Risks not identified, or that are surprisingly more likely.
- Some risk-driven testers seem to operate subjectively. How will I know what coverage I've reached? Do I know that I haven't missed something critical?

Optional Exercise 5.6: Risk-Based Testing


You are testing Amazon.com (or pick another familiar application).

First, brainstorm:
- What are the functional areas of the app?

Then evaluate risks:
- What are some of the ways that each of these could fail?
- How likely do you think they are to fail? Why?
- How serious would each of the failure types be?

At a Glance: Stress Testing


Tag line: Overwhelm the product
Objective: Learn what failure at extremes tells about changes needed in the program's handling of normal cases
Testers: Specialists
Coverage: Limited
Potential problems: Error handling weaknesses
Activities: Specialized
Evaluation: Varies
Complexity: Varies
Harshness: Extreme
SUT readiness: Late stage

Strengths & Weaknesses: Stress Testing


Representative cases
- Buffer overflow bugs
- High volumes of data, device connections, long transaction chains
- Low memory conditions, device failures, viruses, other crises
- Extreme load

Strengths
- Exposes weaknesses that will arise in the field.
- Exposes security risks.

Blind spots
- Weaknesses that are not made more visible by stress.
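A sketch of the idea: drive a hypothetical input handler far past normal sizes and check that failure stays controlled rather than catastrophic:

```python
def parse_name(raw, limit=256):
    # Hypothetical input handler; replace with the code under test.
    if len(raw) > limit:
        raise ValueError("input too long")
    return raw.strip()

# Push input sizes to extremes; a graceful rejection is acceptable,
# while any other exception counts as an uncontrolled failure.
for size in (10, 10_000, 10_000_000):
    try:
        parse_name("A" * size)
    except ValueError:
        pass
    except Exception as e:
        raise AssertionError(f"uncontrolled failure at size {size}: {e!r}")
```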

At a Glance: Regression Testing


Tag line: Automated testing after changes
Objective: Detect unforeseen consequences of change
Testers: Varies
Coverage: Varies
Potential problems: Side effects of changes; unsuccessful bug fixes
Activities: Create automated test suites and run them against every (major) build
Evaluation: Varies
Complexity: Varies
Harshness: Varies
SUT readiness: For unit, early; for GUI, late

Strengths & Weaknesses: Regression Testing


Representative cases
- Bug regression, old fix regression, general functional regression
- Automated GUI regression test suites

Strengths
- Cheap to execute
- Configuration testing
- Regulator friendly

Blind spots
- Immunization curve
- Anything not covered in the regression suite
- Cost of maintaining the regression suite
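A sketch of what one suite entry might look like; apply_discount and "bug 1234" are hypothetical stand-ins for a previously fixed defect:

```python
def apply_discount(total, pct):
    # Hypothetical fixed code: a 0% discount once crashed this path.
    return total - total * pct / 100

def test_bug_1234_zero_discount_regression():
    # Pins the old fix: rerun on every build to catch an unsuccessful re-fix.
    assert apply_discount(50.0, 0) == 50.0

def test_general_functional_regression():
    # General functional regression for the same feature.
    assert apply_discount(200.0, 25) == 150.0

test_bug_1234_zero_discount_regression()
test_general_functional_regression()
```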

At a Glance: Exploratory Testing


Tag line: Simultaneous learning, planning, and testing
Objective: Simultaneously learn about the product and about the test strategies to reveal the product and its defects
Testers: Explorers
Coverage: Hard to assess
Potential problems: Everything unforeseen by planned testing techniques
Activities: Learn, plan, and test at the same time
Evaluation: Varies
Complexity: Varies
Harshness: Varies
SUT readiness: Medium to late; use cases must work

Strengths & Weaknesses: Exploratory Testing


Representative cases
- Skilled exploratory testing of the full product
- Rapid testing and emergency testing (including thrown-over-the-wall "test it today")
- Troubleshooting / follow-up testing of defects

Strengths
- Customer-focused, risk-focused
- Responsive to changing circumstances
- Finds bugs that are otherwise missed

Blind spots
- The less we know, the more we risk missing.
- Limited by each tester's weaknesses (can mitigate this with careful management).
- This is skilled work; juniors aren't very good at it.

At a Glance: User Testing


Tag line: Strive for realism; let's try real humans (for a change)
Objective: Identify failures in the overall human/machine/software system
Testers: Users
Coverage: Very hard to measure
Potential problems: Items that will be missed by anyone other than an actual user
Activities: Directed by user
Evaluation: User's assessment, with guidance
Complexity: Varies
Harshness: Limited
SUT readiness: Late; has to be fully operable

Strengths & Weaknesses: User Testing


Representative cases
- Beta testing
- In-house lab using a stratified sample of the target market
- Usability testing

Strengths
- Exposes design issues
- Finds areas with high error rates
- Can be monitored with flight recorders
- In-house tests can focus on controversial areas

Blind spots
- Coverage not assured
- Weak test cases
- Beta test technical results are mixed
- Must distinguish marketing betas from technical betas

At a Glance: Scenario Testing


Tag line: Instantiation of a use case; do something useful, interesting, and complex
Objective: Challenging cases to reflect real use
Testers: Any
Coverage: Whatever the stories touch
Potential problems: Complex interactions that happen in real use by experienced users
Activities: Interview stakeholders and write screenplays, then implement tests
Evaluation: Any
Complexity: High
Harshness: Varies
SUT readiness: Late; requires stable, integrated functionality

Strengths & Weaknesses: Scenario Testing


Representative cases
- Use cases, or sequences involving combinations of use cases.
- Appraising the product against business rules, customer data, or competitors' output.
- Hans Buwalda's soap opera testing.

Strengths
- Complex, realistic events.
- Can handle (help with) situations that are too complex to model.
- Exposes failures that occur (develop) over time.

Blind spots
- Single-function failures can make this test inefficient.
- Must think carefully to achieve good coverage.

At a Glance: Stochastic or Random Testing (1/2)


Tag line: Monkey testing; high-volume testing with new cases all the time
Objective: Have the computer create, execute, and evaluate huge numbers of tests. The individual tests are not all that powerful, nor all that compelling. The power of the approach lies in the large number of tests. These broaden the sample, and they may test the program over a long period of time, giving us insight into longer-term issues.

At a Glance: Stochastic or Random Testing (2/2)


Testers: Machines
Coverage: Broad but shallow; problems with stateful apps
Potential problems: Crashes and exceptions
Activities: Focus on test generation
Evaluation: Generic, state-based
Complexity: Complex to generate, but individual tests are simple
Harshness: Weak individual tests, but huge numbers of them
SUT readiness: Any
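A minimal monkey-testing loop as a sketch; sut() is a hypothetical stand-in for the real entry point, and the evaluation is the generic kind described above (no crash, no unhandled exception):

```python
import random
import string

def sut(text):
    # Hypothetical system under test; replace with the real entry point.
    return text.encode("utf-8").decode("utf-8")

random.seed(42)               # fixed seed so any failing case can be replayed
for i in range(100_000):      # the power is in the volume, not in each test
    case = "".join(random.choices(string.printable,
                                  k=random.randint(0, 200)))
    try:
        sut(case)             # generic evaluation: it must not blow up
    except Exception as e:
        print(f"case {i} crashed: {e!r} input={case!r}")
        raise
```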

Combining Techniques (Revisited)


- A test approach should be diversified.
- Applying opposite techniques can improve coverage.
- Often one technique can extend another.

(Diagram: techniques A through H arranged around the five dimensions: testers, coverage, potential problems, activities, evaluation.)

Applying Opposite Techniques to Boost Coverage


Contrast these two techniques:

Regression
- Inputs: old test cases and analyses leading to new test cases
- Outputs: archival test cases, preferably well documented, and bug reports
- Better for: reuse across multi-version products

Exploration
- Inputs: models or other analyses that yield new tests
- Outputs: scribbles and bug reports
- Better for: finding new bugs, scouting new areas, risks, or ideas

Applying Complementary Techniques Together


- Regression testing alone suffers fatigue: the bugs get fixed, and new runs add little information. This is a symptom of weak coverage.
- Combine automation with suitable variance, e.g., risk-based equivalence analysis.
- Coverage of the combination can beat the sum of the parts.

(Diagram: overlapping circles for Equivalence, Risk-based, and Regression.)

How To Adopt New Techniques


1. Answer these questions:
   - What techniques do you use in your test approach now?
   - What is its greatest shortcoming?
   - What one technique could you add to make the greatest improvement, consistent with a good test approach: risk-focused? Product-specific? Practical? Defensible?
2. Apply that additional technique until proficient.
3. Iterate.
