
Testing Course

explaining the ISTQB Certified Tester Foundation Level Syllabus

1.1.1 Why is testing necessary - Why do we test?


(The common answer is:) To find bugs! ...but consider also:
- To reduce the impact of failures at the client's site (live defects) and ensure that they will not affect costs & profitability
- To decrease the rate of failures (increase the product's reliability)
- To improve the quality of the product
- To ensure requirements are implemented fully & correctly
- To validate that the product is fit for its intended purpose
- To verify that required standards and legal requirements are met
- To maintain the company reputation
Testing provides the product's measure of quality!

Can we test everything? Is exhaustive testing possible? No, sorry - time & resources make it impractical! But, instead:
- We must understand the risk to the client's business of the software not functioning correctly
- We must manage and reduce risk, carrying out a risk analysis of the application
- Prioritise tests to focus them (time & resources) on the main areas of risk

1.1.1 Why is testing necessary - Testing goals and objectives

The main goals of testing:
- Find defects
- Assess the level of quality of the software product and provide related information to the stakeholders
- Prevent defects
- Reduce the risk of operational incidents
- Increase the product quality

Different viewpoints and objectives:
- Unit & integration testing - find as many defects as possible
- Acceptance testing - confirm that the system works as specified and that the quality is good enough
- Testing metrics gathering - provide information to the project manager about the product quality and the risks involved
- Design tests early and review requirements - help prevent defects

1.1.2 Why is testing necessary - Testing Glossary

A programmer (or analyst) can make an error (mistake), which produces a defect (fault, bug) in the program's code. If such a defect in the code is executed, the system will fail to do what it should do (or will do something it should not do), causing a failure.
- Error (mistake) = a human action that produces an incorrect result
- Defect (bug) = a flaw that can cause the component or system to fail to perform its required function
- Failure = deviation of the component or system from its expected delivery, service or result
- Anomaly = any condition that deviates from expectations based on requirements specifications, design documents, user documentation, standards, or someone's perceptions or expectations
- Defect masking = an occurrence in which one defect prevents the detection of another

1.1.2 Why is testing necessary - Causes of errors


Defects are caused by human errors! Why? Because of:
- Time pressure - the more pressure we are under, the more likely we are to make mistakes
- Code complexity or new technology
- Too many system interactions
- Requirements not clearly defined, changed & not properly documented - we make wrong assumptions about missing bits of information!
- Poor communication
- Poor training

1.1.2 Why is testing necessary - Causes of software defects - Defect taxonomy


(Boris Beizer)
- Requirements (incorrect, logic, completeness, verifiability, documentation, changes)
- Features and functionality (correctness, missing case, domain and boundary, messages, exception mishandled)
- Structural (control flow, sequence, data processing)
- Data (definition, structure, access, handling)
- Implementation and coding
- Integration (internal and external interfaces)
- System (architecture, performance, recovery, partitioning, environment)
- Test definition and execution (test design, test execution, documentation, reporting)

(Cem Kaner (b1))
- User interface (functionality, communication, missing, performance, output)
- Error handling (prevention, detection, recovery)
- Boundary (numeric, loops)
- Calculation (wrong constants, wrong operation order, over & underflow)
- Initialization (data item, string, loop control)
- Control flow (stop, crash, loop, if-then-else)
- Data handling (data type, parameter list, values)
- Race & load conditions (event sequence, no resources)
- Source and version control (old bugs reappear)
- Testing (fail to notice, fail to test, fail to report)

1.1.3 Why is testing necessary - The role of testing in the software life cycle
Testers cooperate with:
- Analysts - to review the specifications for completeness and correctness, and to ensure that they are testable
- Designers - to improve interface testability and usability
- Programmers - to review the code and assess structural flaws
- Project manager - to estimate, plan, develop test cases, perform tests and report bugs, and to assess the quality and risks
- Quality assurance staff - to provide defect metrics
Interactions with these project roles are very complex.
RACI matrix (Responsible, Accountable, Consulted, Informed)

1.1.4 Why is testing necessary - What is quality?


Quality (ISO) = the totality of the characteristics of an entity that bear on its ability to satisfy stated or implied needs (there are many more definitions).

Testing means not only to Verify (the thing is done right) but also to Validate (the right thing is done)!
Software quality includes: reliability, usability, efficiency, maintainability and portability.
- RELIABILITY: the ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.
- USABILITY: the capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
- EFFICIENCY: the capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions.
- MAINTAINABILITY: the ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.
- PORTABILITY: the ease with which the software product can be transferred from one hardware or software environment to another.

1.1.4 Why is testing necessary - Testing and quality


Testing does not inject quality into the product; it measures the product's level of quality.
Quality can be measured for: progress variance (planned versus actual)

Measuring product quality:
- Functional compliance - functional software requirements testing
- Non-functional requirements
- Test coverage criteria
- Defect count or defect trend criteria

1.1.4 Why is testing necessary - Quality attributes


The QUINT model (extended ISO model)


1.1.5 Why is testing necessary - How much testing is enough?

The five basic criteria often used to decide when to stop testing are:

- Previously defined coverage goals have been met
- The defect discovery rate has dropped below a previously defined threshold
- The cost of finding the "next" defect exceeds the expected loss from that defect
- The project team reaches consensus that it is appropriate to release the product
- The manager decides to deliver the product
All these criteria are risk based. It is important not to depend on only one stopping criterion. Software Reliability Engineering can also help to determine when to stop testing, by taking into consideration aspects like failure intensity.


1.2 What is testing - Definition of testing


Testing = the process concerned with planning the necessary static and dynamic activities, and with the preparation and evaluation of software products and related deliverables, in order to:
- determine that they satisfy specified requirements
- demonstrate that they are fit for the intended use
- detect defects, and help and motivate the developers to fix them
- measure, assess and improve the quality of the software product
Testing should be performed throughout the whole software life cycle.

There are two basic types of testing: execution-based and non-execution-based.
Other definitions:
- (IEEE) Testing = the process of analyzing a software item to detect the differences between existing and required conditions and to evaluate its features
- (Myers (b3)) Testing = the process of executing a program with the intent of finding errors
- (Craig & Jaskiel (b5)) Testing = a concurrent lifecycle process of engineering, using and maintaining test-ware in order to measure and improve the quality of the software being tested

1.2 What is testing - Testing schools


Analytic School - testing is rigorous, academic and technical
- Testing is a branch of CS/Mathematics
- Testing techniques must have a logic-mathematical form
- Key question: Which techniques should we use?
- Requires precise and detailed specifications

Factory School - testing is a way to measure progress, with emphasis on cost and repeatable standards
- Testing must be managed & cost effective
- Testing validates the product & measures development progress
- Key questions: How can we measure whether we're making progress? When will we be done?
- Requires clear boundaries between testing and other activities (start/stop criteria)
- Encourages standards (V-model), best practices, and certification

Quality School - emphasizes process & quality, acting as the gatekeeper


- Software quality requires discipline
- Testers may need to police developers to follow the rules
- Key question: Are we following a good process?
- Testing is a stepping stone to process improvement

Context-Driven School - emphasizes people, setting out to find the bugs that will be most important to stakeholders
- Software is created by people; people set the context
- Testing finds bugs, acting as a skilled, mental activity
- Key question: What testing would be most valuable right now?
- Expect changes; adapt testing plans based on test results
- Testing research requires empirical and psychological study


1.3 General testing principles


Testing shows presence of defects, but cannot prove that there are no more defects; testing can only reduce the probability of undiscovered defects
- Complete, exhaustive testing is impossible; a good strategy and risk management must be used
- Pareto rule (defect clustering): usually 20% of the modules contain 80% of the bugs
- Early testing: testing activities should start as soon as possible (including planning, design, reviews)
- Pesticide paradox: if the same set of tests is repeated over and over again, no new bugs will be found; test cases should be reviewed and modified, and new test cases developed

Context dependence: test design and execution is context dependent (desktop, web applications, real-time, ...)
Verification and validation: discovering defects cannot help a product that is not fit for the users' needs

1.3 General testing principles - heuristics of software testing

- Operability - the better it works, the more efficiently it can be tested
- Observability - what we see is what we test
- Controllability - the better we control the software, the more the testing process can be automated and optimized
- Decomposability - by controlling the scope of testing, we can quickly isolate problems and perform effective and efficient testing
- Simplicity - the less there is to test, the more quickly we can test it
- Stability - the fewer the changes, the fewer the disruptions to testing
- Understandability - the more information we have, the smarter we will test
- Suitability - the more we know about the intended use of the software, the better we can organize our testing to find important bugs

1.4.1 Fundamental test process - phases

- Test planning & test control
- Test analysis & design
- Test implementation & execution
- Evaluating exit criteria & reporting
- Test closure activities


1.4.1 Fundamental test process - planning & control


Planning:
1. Determine scope
- Study project documents, the software life cycle used, specifications, desired product quality attributes
- Clarify test process expectations
2. Determine risks
- Choose a quality risk analysis method (e.g. FMEA)
- Document the list of risks, probability, impact, priority; identify mitigation actions
3. Estimate testing effort, determine costs, develop the schedule
- Define necessary roles
- Decompose the test project into phases and tasks (WBS)
- Schedule tasks, assign resources, set up dependencies
4. Refine the plan
- Select the test strategy (how to do it, which test types at which test levels)
- Select metrics to be used for defect tracking, coverage, monitoring
- Define entry and exit criteria

Control:
- Measure and analyze results
- Monitor testing progress, coverage, exit criteria
- Assign or reallocate resources, update the test plan schedule
- Initiate corrective actions
- Make decisions

1.4.2 Fundamental test process - analysis & design

- Reviewing the test basis (such as requirements, architecture, design, interfaces)
- Identifying test conditions or test requirements and required test data, based on analysis of the test items and the specification
- Designing the tests:
  - Choose test techniques
  - Identify test scenarios, pre-conditions, expected results, post-conditions
  - Identify possible test oracles
- Evaluating the testability of the requirements and system
- Designing the test environment set-up and identifying any required infrastructure and tools

(see Lee Copeland (b2))


1.4.2 Fundamental test process - what is a test oracle?


The expected result (test outcome) must be defined at the test analysis stage. Who will decide whether (expected result = actual result) when the test is executed? The Test Oracle!

Test Oracle = a source to determine the expected result; a principle or mechanism to recognize a problem. The Test Oracle can be:
- an existing system (the old version)
- a document (specification, user manual)
- a competent client representative
- but never the source code itself!
Oracles in use = simplification of risk: do not assess "pass - fail", but instead "problem - no problem".
Problem: Oracles and Automation - our ability to automate testing is fundamentally constrained by our ability to create and use oracles. Possible issues: false alarms, missed bugs.
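As an illustration of the "existing system as oracle" idea, here is a minimal Python sketch: a previous, trusted version of a routine supplies the expected result against which the new version is compared. The function names (legacy_discount, new_discount) are hypothetical and not part of the syllabus.

# Minimal sketch: using the previous release of a function as a test oracle.

def legacy_discount(amount):          # old, trusted version acts as the oracle
    return round(amount * 0.10, 2)

def new_discount(amount):             # version under test
    return round(amount * 0.1, 2)

def check(amount):
    expected = legacy_discount(amount)   # oracle provides the expected result
    actual = new_discount(amount)        # actual result from the system under test
    # report "problem / no problem" rather than a hard pass/fail verdict
    return "no problem" if actual == expected else f"problem: {actual} != {expected}"

for value in (0, 99.99, 100.0):
    print(value, check(value))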


1.4.3 Fundamental test process - implementation & execution


Test implementation:
- Develop and prioritize test cases; create test data, test harnesses and automation scripts
- Create test suites from the test cases
- Check the test environment

Test execution:
- Execute (manually or automatically) the test cases (suites)
- Use test oracles to determine whether a test passed or failed
- Log the outcome of test execution
- Report incidents (bugs) and try to discover whether they are caused by the test data, by the test procedure, or whether they are actual defects
- Expand test activities as necessary, according to the testing mission
(see Rex Black (b4))


1.4.3 Fundamental test process - prioritizing the test cases


Why prioritize the test cases?
- It is not possible to test everything; we must do our best in the time available
- Testing must be risk based, ensuring that the errors that do get through to the client's production system will have the smallest possible impact and frequency of occurrence
- This means we must prioritise and focus testing on the priorities

What to watch?
- Severity of possible defects
- Probability of possible defects
- Visibility of possible defects
- Client requirement importance
- Business or technical criticality of a feature
- Frequency of changes applied to a module
- Scenario complexity

1.4.4 Fundamental test process - evaluating exit criteria and reporting

Evaluate exit criteria:
- Check test logs against the exit criteria specified in the test mission definition
- Assess if more tests are needed
- Check if the testing mission should be changed

Test reporting:
- Write the test summary report for the stakeholders' use
- The test summary report should include:
  - Test case execution coverage (% executed)
  - Test case pass / fail %
  - Active bugs, sorted according to their severity
(see Rex Black (b4) & RUP - Test discipline (s5))


1.4.5 Fundamental test process - test closure

- Verify whether test deliverables have been delivered
- Check and close the remaining active bug reports
- Archive the test-ware and environment
- Hand over the test environment
- Analyze the identified test process problems (lessons learned)
- Implement improvements based on an action plan
(see Rex Black (b4))


1.5 The psychology of testing


Testing is regarded as a destructive activity (we run tests to make the software fail).

A good tester:
- Should always have a critical approach
- Must pay attention to detail
- Must have analytical skills
- Should have good verbal and written communication skills
- Must analyse and work with incomplete facts
- Must learn quickly about the product being tested
- Should be able to quickly prioritise
- Should be a planned, organised kind of person

Also, the tester must have good knowledge about:
- The customer's business workflows
- The product architecture and interfaces
- The software project process
- Testing techniques and practices

Rex Black's Top 10 professional errors:
- Fall in Love with a Tool
- Write Bad Bug Reports
- Fail to Define the Mission
- Ignore a Key Stakeholder
- Deliver Bad News Badly
- Take Sole Responsibility for Quality
- Be an Un-appointed Process Cop
- Fail to Fire Someone who Needs Firing
- Forget You're Providing a Service
- Ignore Bad Expectations
(see also Brian Marick's article)


1.5 The psychology of testing


"The best tester isn't the one who finds the most bugs; the best tester is the one who gets the most bugs fixed." (Cem Kaner)
Selling bugs (see Cem Kaner (c1)):
- Motivate the programmer
- Demonstrate the bug effects
- Overcome objections
- Increase the defect description coverage (indicate detailed preconditions, behavior)
- Analyze the failure
- Produce a clear, short, unambiguous bug report
- Advocate error costs

Levels of independence of the testing team:
- Low - developers write and execute their own tests
- Medium - tests are written and executed by another developer
- High - tests are written and executed by an independent testing team (internal or external)

Testers' Agile Manifesto (Jonathan Kohl):
- bug advocacy over bug counts
- testable software over exhaustive (requirements) docs
- measuring product success over measuring process success
- team collaboration over departmental independence


2.1.1 The V testing model


2.1.1 The V testing model - Verification & Validation

Verification = confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled.
Validation = confirmation by examination and through the provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
Verification is the dominant activity at the unit, integration and system testing levels; validation is a mandatory activity at the acceptance testing level.


2.1.1 The W testing model - dynamic testing


2.1.1 The W testing model - static testing


2.1.2 Software development models - Waterfall


2.1.2 Software development models - Waterfall

Waterfall weaknesses:
- Linear: any attempt to go back two or more phases to correct a problem or deficiency results in major increases in cost and schedule
- Integration problems usually surface too late; previously undetected errors or design deficiencies will emerge, adding risk with little time to recover
- Users can't see quality until the end; they can't appreciate quality if the finished product can't be seen
- Deliverables are created for each phase and are considered frozen; if the deliverable of a phase changes, which often happens, the project will suffer schedule problems
- The entire software product is being worked on at one time; there is no way to partition the system for delivery of pieces of the system


2.1.2 Software development models - Rapid Prototype Model


2.1.2 Software development models - Rapid Prototype Model

Rapid Prototype Model weaknesses:
- In the rush to create a working prototype, overall software quality or long-term maintainability may be overlooked
- Tendency for difficult problems to be pushed to the future, causing the initial promise of the prototype not to be met by subsequent products
- Developers may fall into a code-and-fix cycle, leading to expensive, unplanned prototype iterations
- Customers become frustrated without knowing the exact number of iterations that will be necessary
- Users may have a tendency to add to the list of items to be prototyped until the scope of the project far exceeds the feasibility study


2.1.2 Software development models - Incremental Model


2.1.2 Software development models - Incremental Model

Incremental Model weaknesses:
- Definition of a complete, fully functional system must be done early in the life cycle to allow for the definition of the increments
- The model does not allow for iterations within each increment
- Because some modules will be completed long before others, well-defined interfaces are required
- Requires good planning and design: management must take care to distribute the work; the technical staff must watch dependencies


2.1.2 Software development models - Spiral Model


2.1.2 Software development models - Spiral Model

Spiral Model weaknesses:
- The model is complex, and developers, managers, and customers may find it too complicated to use
- Considerable risk assessment expertise is required
- Hard to define objective, verifiable milestones that indicate readiness to proceed through the next iteration
- May be expensive - time spent planning, resetting objectives, doing risk analysis, and prototyping may be excessive


2.1.2 Software development models - Rational Unified Process


2.1.3 Software development models - Testing life cycle

For each software development activity there must be a corresponding testing activity. The objectives of testing are specific to the activity being tested.

Planning, analysis and design of a testing activity should be done during the corresponding development activity.
Reviews and inspections must be considered part of the testing activities.


2.2.1 Test levels - Component testing

Target: single software modules, components that are separately testable

Access to the code being tested is mandatory; usually involves the programmer.

May consist of:
- Functional tests
- Non-functional tests (e.g. stress tests)
- Structural tests (statement coverage, branch coverage)

Test cases follow the low-level specification of the module.
Can be automated (test-driven software development):
- Develop the test code first
- Then, write the code to be tested
- Execute until pass

Also named Unit testing.



Good programming style (design-by-contract, respecting Demeter's law) enhances the efficiency of unit testing.
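As an illustration of the test-first bullets above, here is a minimal sketch using Python's unittest: the test is written before the helper it describes, then just enough code is written to make it pass. The leap_year example and its name are hypothetical, not taken from the syllabus.

import unittest

# Test written first (test-driven development): it describes the expected
# behaviour of a hypothetical leap_year() helper before the helper exists.
class LeapYearTest(unittest.TestCase):
    def test_typical_years(self):
        self.assertTrue(leap_year(2024))
        self.assertFalse(leap_year(2023))

    def test_century_rules(self):
        self.assertFalse(leap_year(1900))   # divisible by 100 but not by 400
        self.assertTrue(leap_year(2000))    # divisible by 400

# Production code written afterwards, just enough to make the tests pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()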

2.2.2 Test levels - Integration testing

Target: the interfaces between components, and interfaces with other parts of the system.
We focus on the data exchanged, not on the tested functionalities. Understanding the product's software architecture is critical.
May consist of:
- Functional tests
- Non-functional tests (e.g. performance tests)
- Component integration testing (after component testing)
- System integration testing (after system testing)
Test strategy may be bottom-up, top-down or big-bang.


2.2.2 Test levels - Component integration testing

Component integration testing (done after component testing):
- Link a few components to check that they communicate correctly
- Iteratively link more components together
- Verify that data is exchanged between the components as required
- Increase the number of components, create & test subsystems and finally the complete system
Drivers and stubs should be used when necessary:
- driver: a software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system
- stub: a skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it; it replaces a called component
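A minimal Python sketch of the stub idea above: a skeletal payment component replaces the real one, so the data exchanged over the interface can be checked in isolation. All class and method names are illustrative assumptions, not part of the syllabus.

# Sketch of component integration testing with a stub.
class PaymentStub:
    """Skeletal replacement for the real payment component."""
    def __init__(self):
        self.received = []                 # record what the caller sends

    def charge(self, customer_id, amount):
        self.received.append((customer_id, amount))
        return {"status": "OK"}            # canned answer

class OrderComponent:
    def __init__(self, payment):
        self.payment = payment             # normally the real payment component

    def place_order(self, customer_id, amount):
        return self.payment.charge(customer_id, amount)["status"] == "OK"

stub = PaymentStub()
assert OrderComponent(stub).place_order("C-1", 25.0)
assert stub.received == [("C-1", 25.0)]   # verify the data passed over the interface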

2.2.2 Test levels - System integration testing

System integration testing (done after system or acceptance testing):
- Testing the integration of systems and packages; testing interfaces to external organizations
- We check the data exchanged between our system and other external systems
Additional difficulties:
- Multiple platforms
- Communications between platforms
- Management of the environments
Approaches to access the external systems:
- Testing in a test environment
- Testing in a clone of the production environment
- Testing in a real production environment

2.2.3 Test levels - System testing

System testing = the process of testing an integrated system to verify that it meets specified requirements

Target: the whole product (system) as defined in the scope document


Environment issues are critical.
May consist of:
- Functional tests, based on the requirement specifications
- Non-functional tests (e.g. load tests)
- Structural tests (e.g. web page links, or menu item coverage)
Black box testing techniques may be used (e.g. business rule decision tables).
The test strategy may be risk based. Test coverage is monitored.

2.2.4 Test levels - Acceptance testing

Acceptance testing = formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

The main goal: establish confidence in the system - is the product good enough to be delivered to the client?
- The main focus is not to find defects, but to assess the readiness for deployment
- It is not necessarily the final testing level; a final system integration testing session can be executed after the acceptance tests
- May also be executed after component testing (component usability acceptance)
- Usually involves client representatives

Typical forms:
- User acceptance: business-aware users verify the main features
- Operational acceptance testing: backup-restore, security, maintenance
- Alpha and Beta testing: performed by customers or potential users
  - Alpha: at the developer's site
  - Beta: at the customer's site


2.3.1 Test types - Functional testing

Target: test the functionalities (features) of a product
- Specification based: uses test cases derived from the specifications (use cases)
- Business process based: uses business scenarios
- Focused on checking the system against the specifications
- Can be performed at all test levels
- Considers the external behavior of the system

Black box design techniques will be used


Security testing is part of functional testing, related to the detection of threats


2.3.2 Test types - Non-functional testing

Non-functional testing = testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.
Targeted at testing the product quality attributes:
- Performance testing
- Load testing (how much load can be handled by the system?)
- Stress testing (evaluate system behavior at and beyond the specified limits)
- Usability testing
- Reliability testing
- Portability testing
- Maintainability testing


2.3.2 Test types - Non-functional testing - Usability

Usability testing = used to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.
People selected from the potential users may be involved, to study how they use the system.

A quick and focused beta-test may be a cheap way of doing Usability testing
There is no simple way to examine how people will use the system

Easy to understand is not the same as easy to learn, easy to use, or easy to operate.


2.3.2 Test types - Non-functional testing - Installability

Installability testing = the process of testing the installability of a software product.
- Does the installation work?
- How easy is it to install the system?

Does installation affect other software?


Does the environment affect the product? Does it uninstall correctly?


2.3.2 Test types - Non-functional testing - Load, Stress, Performance, Volume testing

Load test = a test type concerned with measuring the behavior of a component or system with increasing load (e.g. number of parallel users and/or number of transactions), to determine what load can be handled by the component or system.
Stress test = testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
Performance test = the process of testing to determine the performance of a software product. Performance can be measured by watching:
- Response time
- Throughput
- Resource utilization
Spike test = periodically keeping the system, for short amounts of time, beyond its specified limits.
Endurance test = a load test performed over a long time interval (week(s)).
Volume test = testing where the system is subjected to large volumes of data.
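A minimal sketch of how response time and throughput might be measured for a single operation. Real load and stress tests are normally driven by dedicated tools with many parallel virtual users; operation_under_test is a placeholder assumed for illustration.

import time

def operation_under_test():
    sum(range(10_000))                    # stands in for a real transaction

samples = []
for _ in range(100):
    start = time.perf_counter()
    operation_under_test()
    samples.append(time.perf_counter() - start)   # one response-time sample

print("avg response time:", sum(samples) / len(samples))
print("max response time:", max(samples))
print("throughput (ops/s):", len(samples) / sum(samples))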

2.3.3 Test types - Structural testing

Targeted at testing:
- the internal structure (component)
- the architecture (system)

Uses only white box design techniques

Can be performed at all test levels


Also used to help measure coverage (% of items covered by tests). Tool support is critical.


2.3.4 Test types - Confirmation & regression testing

Confirmation testing = re-testing of a module or product to confirm that a previously detected defect has been fixed.
- Implies the use of a bug tracking tool
- Confirmation testing is not the same as debugging (debugging is a development activity, not a testing activity)

Regression testing = re-testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered as a result of the changes made.
- It is performed when the software or its environment is changed
- Can be performed at all test levels
- Can be automated (for cost and schedule reasons)


2.4 Maintenance testing

Maintenance testing = Testing the changes to an operational system or the impact of a changed environment to an operational system
Done on an existing operational system, triggered by modification, retirement or migration of the software.
Includes:
- Release-based changes
- Corrective changes
- Database upgrades
Regression testing is also involved.
Impact analysis = determining how the existing system may be affected by changes (used to help decide how much regression testing to do).


3.1 Reviews and the testing process

Static testing = testing of a component or system at specification or implementation level without execution of that software, e.g. reviews (manual) or static code analysis (automated).

Reviews
Why review?
- To identify errors as soon as possible in the development lifecycle
- Reviews offer the chance to find omissions and errors in the software specifications
The target of a review is a software deliverable:
- Specification
- Use case
- Design
- Code
- Test case
- Manual


3.1 Reviews and the testing process

When to review? As soon as a software artifact is produced, before it is used as the basis for the next step in development.
Benefits include:
- Early defect detection
- Reduced testing costs and time
- Can find omissions
Risks:
- If misused, reviews can lead to friction between project team members; the errors & omissions found should be regarded as something positive, and the author should not take them personally
- No follow-up is made to ensure corrections have been made
- Witch-hunts, used when things are going wrong


3.2.1 Phases of a formal review

Formal review phases:
- Planning: define scope, select participants, allocate roles, define entry & exit criteria
- Kick-off: distribute documents, explain objectives and process, check entry criteria
- Individual preparation: each participant studies the documents, takes notes, raises questions and comments
- Review meeting: participants discuss and log defects, make recommendations
- Rework: fixing defects (by the author)
- Follow-up: verify again, gather metrics, check exit criteria


3.2.2 Roles in a formal review

The formal reviews can use the following predefined roles:

- Manager: schedules the review, monitors entry and exit criteria
- Moderator: distributes the documents, leads the discussion, mediates conflicting opinions
- Author: owner of the deliverable to be reviewed
- Reviewers: technical domain experts, identify and note findings
- Scribe: records and documents the discussions during the meeting


3.2.3 Types of review


Informal review
- A peer or team lead reviews a software deliverable
- No formal process is applied
- Documentation of the review is optional
- Quick way of finding omissions and defects
- Breadth and depth of the review depend on the reviewer
- Main purpose: inexpensive way to get some benefit

Walkthrough
- The author of the deliverable leads the review activity; others participate
- Preparation by the reviewers is optional
- Scenario based
- The sessions are open-ended
- Can be informal but also formal
- Main purposes: learning, gaining understanding, defect finding

Technical review
- Formal defect detection process
- The main meeting is prepared
- Team includes peers and technical domain experts
- May vary in practice from quite informal to very formal
- Led by a moderator, who is not the author
- Checklists may be used, reports can be prepared
- Main purposes: discuss, make decisions, evaluate alternatives, find defects, solve technical problems and check conformance to specifications and standards

Inspection
- Formal process, based on checklists, entry and exit criteria
- Dedicated, precise roles
- Led by the moderator
- Metrics may be used in the assessment
- Reports and lists of findings are mandatory
- Follow-up process
- Main purpose: find defects

3.2.4 Success factors for reviews

- A clear objective is set
- Appropriate experts are involved
- Issues are identified, not fixed on-the-spot
- Adequate psychological handling (the author is not punished for the defects found)
- The level of formalism is adapted to the concrete situation
- Minimal preparation and training
- Management encourages learning and process improvement
- Time-boxing is used to determine the time allocated to each part of the document to be reviewed
- Use of effective checklists


3.3 Static analysis by tools

Performed without executing the examined software, but assisted by tools. The approach may be data flow or control flow based.
Benefits:
- early defect detection
- early warnings about unwanted code complexity
- detects missing links
- improved maintainability of code and design

Typical defects discovered:


- reference to an un-initialized variable
- variables that are never used
- unreachable code
- programming standard violations
- security vulnerabilities

4. Test design techniques - glossary

Test condition = item, event, attribute of a module or system that could be verified (ex: feature, structure element, transaction, quality attribute)

Test data = data that affects or is affected by the execution of the specific module
Test case [IEEE] = a set of input values, execution preconditions, expected results and execution post-conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test case specification [IEEE] = a document specifying a set of test cases for a test condition.
Test procedure (suite) specification = a document specifying a sequence of actions for the execution of a series of test cases.


4.1 Test design - test development process


1. Identify test conditions
- Inputs: field level, group level
- Capability related: trigger conditions, constraints or limits, interfaces to other products, validation of inputs at the following levels of aggregation: field / action / message, record / row / window, file / table / screen, database; product states; behavior rules
- Architectural design related: invocation paths, communication paths, internal data conditions, design states, exceptions

2. Develop test cases
- Use cases are used as input
- Test cases will cover all possible paths of the execution flow graph
- Test data should be specified if necessary
- Priorities of test cases should be assigned
- A traceability matrix (use cases x test cases) should be maintained

3. Develop test procedures
- Group test cases into execution schedules
- Factors to be considered: a. prioritization, b. logical dependencies, c. regression tests

Traceability from test conditions to the specifications (requirements) is a must.
Risk analysis is a best practice.


4.2 Test design - categories of test design techniques

- Black box: no knowledge of the internal structure is used
- White box: based on the analysis of the internal structure
- Static: without running the software; exercised on specific project artifacts

Each black box or white box test technique has:


- A method (how to do it)
- A test case design approach (how to create test cases using the method)
- A coverage measurement technique (% covered) - except for black box syntax testing

Another taxonomy:
- Specification based: test cases are built from the specifications of the module
- Structure based: information about how the module is constructed (design, code) is used to derive the test cases
- Experience based: the tester's knowledge about the specific domain and about likely defects is used


4.3.1 Black box techniques - equivalence partitioning

To minimize testing, partition the input (output) values into groups of equivalent values (equivalent from the test outcome perspective). Select a value from each equivalence class as a representative value.

If an input is a continuous range of values, then there is typically one class of valid values and two classes of invalid values, one below the valid class and one above it.
Example: the rule for hiring a person, according to their age:
- 0-15 = do not hire
- 16-17 = part time
- 18-54 = full time
- 55-99 = do not hire
Which are the valid equivalence classes? And the invalid ones? Give examples of representative values! (other examples)
(see Lee Copeland b2 chap.3, Cem Kaner c1, Paul Jorgensen b7 chap.2.2,6.3)
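A small sketch of the hiring-rule example as equivalence classes, with one representative value per class. The hire_decision implementation is an assumption made for illustration, not part of the syllabus.

# Equivalence classes for the hiring rule above: four valid classes and
# two invalid classes (below 0 and above 99), one representative value each.
PARTITIONS = {
    "invalid: age < 0":   (-10, "reject input"),
    "0-15 do not hire":   (7,   "do not hire"),
    "16-17 part time":    (16,  "part time"),
    "18-54 full time":    (30,  "full time"),
    "55-99 do not hire":  (60,  "do not hire"),
    "invalid: age > 99":  (120, "reject input"),
}

def hire_decision(age):                    # hypothetical implementation of the rule
    if age < 0 or age > 99:
        return "reject input"
    if age <= 15 or age >= 55:
        return "do not hire"
    return "part time" if age <= 17 else "full time"

# one representative value per class is enough for this technique
for name, (representative, expected) in PARTITIONS.items():
    assert hire_decision(representative) == expected, name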

4.3.1 Black box techniques - all-pairs testing

In practice, there are situations when a great number of combinations must be tested. For example, a web site must operate correctly with different browsers (Internet Explorer 5.0, 5.5, and 6.0; Netscape 6.0, 6.1, and 7.0; Mozilla 1.1; and Opera 7); using different plug-ins (RealPlayer, MediaPlayer, or none); running on different client operating systems (Windows 95, 98, ME, NT, 2000, and XP); receiving pages from different servers (IIS, Apache, and WebLogic); running on different server operating systems (Windows NT, 2000, and Linux).

Test environment combinations: 8 browsers x 3 plug-ins x 6 client operating systems x 3 servers x 3 server OS = 1,296 combinations!

All-pairs testing is the solution: it tests a significant subset, covering all pairs of variable values.
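The arithmetic behind the 1,296 figure, as a quick check. An actual all-pairs (pairwise) test set is normally generated by a tool and is far smaller than the full cross product.

# Full cross product of the environment options on this slide:
factors = {"browsers": 8, "plug-ins": 3, "client OS": 6, "servers": 3, "server OS": 3}
total = 1
for count in factors.values():
    total *= count
print(total)   # 1296 combinations

# All-pairs testing instead covers every pair of values at least once,
# which a pairwise tool can usually achieve with a few dozen configurations.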

4.3.2 Black box techniques - boundary value analysis

Boundaries = the edges of the equivalence classes. Boundary values = values at the edge and nearest to the edge.
The steps for using boundary values:

First, identify the equivalence classes.


Second, identify the boundaries of each equivalence class.
Third, create test cases for each boundary value by choosing one point on the boundary, one point just below the boundary, and one point just above the boundary. "Below" and "above" are relative terms and depend on the data value's units.
For the previous example, the boundary values are {-1,0,1}, {14,15,16}, {15,16,17}, {16,17,18}, {17,18,19}, {54,55,56}, {98,99,100}, or, omitting duplicate values: {-1,0,1,14,15,16,17,18,19,54,55,56,98,99,100} (other examples)

(see Lee Copeland b2 chap.4, Paul Jorgensen b7 chap5)
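A sketch that turns the boundary values listed above into checks against the same hypothetical hire_decision rule used in the equivalence partitioning sketch; the expected outcomes are assumptions based on the stated hiring rule.

# Boundary values for the hiring-rule example, with their expected outcomes.
BOUNDARY_CASES = {
    -1: "reject input", 0: "do not hire", 1: "do not hire",
    14: "do not hire", 15: "do not hire", 16: "part time",
    17: "part time", 18: "full time", 19: "full time",
    54: "full time", 55: "do not hire", 56: "do not hire",
    98: "do not hire", 99: "do not hire", 100: "reject input",
}

def hire_decision(age):                    # same hypothetical rule as before
    if age < 0 or age > 99:
        return "reject input"
    if age <= 15 or age >= 55:
        return "do not hire"
    return "part time" if age <= 17 else "full time"

for age, expected in BOUNDARY_CASES.items():
    assert hire_decision(age) == expected, f"age {age}"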

4.3.3 Black box techniques - decision tables

Conditions represent various input conditions

Actions are the actions taken depending on the various combinations of input conditions. Each rule defines a unique combination of conditions that results in the execution of the actions associated with that rule.

Actions do not depend on the condition evaluation order, but only on their values.
Actions do not depend on any previous input conditions or system state.

(see Lee Copeland b2 chap 5, Paul Jorgensen b7 chap 7)

4.3.3 Black box techniques - decision tables - example

Are a, b, c the edges of a triangle?

(however, some additional test cases are needed)
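One possible reading of the triangle example as a decision table, sketched in Python: three conditions (each pair of sides must exceed the third) and one action, with every combination of condition outcomes enumerated as a rule column. This is an assumption about the intended table, not a reproduction of the original slide.

from itertools import product

def is_triangle(a, b, c):
    c1 = a + b > c            # condition 1
    c2 = b + c > a            # condition 2
    c3 = a + c > b            # condition 3
    return c1 and c2 and c3   # action fires only when all conditions hold

# Enumerate the rule columns: every combination of condition outcomes.
for c1, c2, c3 in product([True, False], repeat=3):
    action = "triangle" if (c1 and c2 and c3) else "not a triangle"
    print(f"a+b>c={c1}  b+c>a={c2}  a+c>b={c3}  ->  {action}")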


4.3.4 Black box techniques - state transition tables

Allow the tester to interpret the system in terms of:
- States
- Transitions between states

Events that trigger transitions


Actions resulting from the transitions

Transition table used:

(see Lee Copeland b2, chap 7)
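A minimal, hypothetical state transition table in Python (the states and events are illustrative, not the ticket-buying example on the next slide): each test case is a path of events through the table plus the expected end state.

# Transition table: {(state, event): (new_state, action)} - all names are illustrative.
TRANSITIONS = {
    ("idle",    "insert_card"): ("card_in", "prompt for PIN"),
    ("card_in", "enter_pin"):   ("ready",   "show menu"),
    ("ready",   "cancel"):      ("idle",    "eject card"),
    ("card_in", "cancel"):      ("idle",    "eject card"),
}

def run(events, state="idle"):
    for event in events:
        if (state, event) not in TRANSITIONS:      # event not allowed in this state
            raise ValueError(f"{event!r} not allowed in state {state!r}")
        state, action = TRANSITIONS[(state, event)]
        print(f"{event} -> {state} ({action})")
    return state

# A test case is a sequence of events plus the expected end state.
assert run(["insert_card", "enter_pin", "cancel"]) == "idle"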


4.3.4 Black box techniques - state transition tables - example

Ticket purchase - web application

Exercise: Fill-in the transition table!



4.3.5 Black box techniques - requirements based testing

Best practices:
- Validate requirements (what) against objectives (why)
- Apply use cases against requirements
- Perform ambiguity reviews
- Involve domain experts in requirements reviews
- Create cause-effect diagrams
- Check the logical consistency of test scenarios
- Validate test scenarios with domain experts and users
- Walk through scenarios comparing with design documents
- Walk through scenarios comparing with code


4.3.5 Black box techniques - scenario testing

Good scenario attributes:
- Is based on a real story
- Is motivating for the tester
- Is credible
- Involves a sufficiently complex use of environment and data
- Is easy to evaluate (no need for an external oracle)

How to create good test scenarios:
- Write down real-life stories
- List possible users, analyze their interests and objectives
- Consider also inexperienced or hostile users
- List system benefits and create paths to access those features
- Watch users using old versions of the system or a similar system
- Study complaints about other similar systems

4.3.5 Black box techniques - use case testing

Generating the Test Cases from the Use Cases

Steps:
1. Identify the use-case scenarios.
2. For each scenario, identify one or more test cases.
3. For each test case, identify the conditions that will cause it to execute.
4. Complete the test case by adding data values.
(see example)

Most common test case mistakes:
1. Making cases too long
2. Incomplete, incorrect, or incoherent setup
3. Leaving out a step
4. Naming fields that changed or no longer exist
5. Unclear whether the tester or the system does the action
6. Unclear what is a pass or fail result
7. Failure to clean up

4.3.5 Black box techniques - Syntax testing

Syntax testing = uses a model of the formally-defined syntax of the inputs to a component. The syntax is represented as a number of rules, each of which defines the possible means of production of a symbol in terms of sequences of, iterations of, or selections between other symbols. Here is a representation of the syntax of a floating point number, float, in Backus-Naur Form (BNF):

float = int "e" int.
int   = ["+"|"-"] nat.
nat   = {dig}.
dig   = "0"|"1"|"2"|"3"|"4"|"5"|"6"|"7"|"8"|"9".

Syntax testing is the only black box technique without a coverage metric assigned.
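A small sketch of how valid test inputs could be generated from the float grammar above; the repetition count for nat is an assumption (at least one digit), and invalid (negative) test cases would be obtained by deliberately breaking the rules.

import random

def dig():
    return random.choice("0123456789")                        # dig

def nat():
    return "".join(dig() for _ in range(random.randint(1, 3)))  # {dig}, assumed >= 1 digit

def int_():
    return random.choice(["", "+", "-"]) + nat()               # ["+"|"-"] nat

def float_():
    return int_() + "e" + int_()                                # int "e" int

random.seed(1)
for _ in range(5):
    print(float_())   # valid inputs; mutate them to obtain invalid test cases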

4.4 White box techniques - Control flow


Modules of code are converted to graphs, the paths through the graphs are analyzed, and test cases are created from that analysis. There are different levels of coverage. The graph elements are: process blocks, decision points, junction points.

A process block is a sequence of program statements that execute sequentially.


No entry into the block is permitted except at the beginning. No exit from the block is permitted except at the end. Once the block is initiated, every statement within it will be executed sequentially.

A decision point is a point in the module at which the control flow can change. Most decision points are binary and are implemented by if-then-else statements. Multi-way decision points are implemented by case statements. They are represented by a bubble with one entry and multiple exits.

A junction point is a point at which control flows join together

Example:

(see Lee Copeland b2, chap.10)


4.4.1 White box techniques - statement coverage

Statement coverage = executed statements / total executable statements

Example:

a;
if (b) {
    c;
}
d;

In case b is TRUE, executing the code will result in 100% statement coverage
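A hand-rolled illustration (not a real coverage tool) of the formula above, marking each statement of the snippet as it executes:

# Track which statements of the snippet above have been executed.
executed = set()

def program(b):
    executed.add("a")          # a;
    executed.add("if (b)")     # the decision statement itself
    if b:
        executed.add("c")      # c;  only reached when b is TRUE
    executed.add("d")          # d;

ALL_STATEMENTS = {"a", "if (b)", "c", "d"}

program(b=False)
print("coverage:", len(executed) / len(ALL_STATEMENTS))   # 3/4 = 75%

program(b=True)
print("coverage:", len(executed) / len(ALL_STATEMENTS))   # 4/4 = 100%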


4.4.1 White box techniques - statement coverage - exercise

Given the code:


a;
if (x) {
    b;
    if (y) {
        c;
    } else {
        d;
    }
} else {
    e;
}

x=T, y=T -> executes a, b, c
x=T, y=F -> executes a, b, d
x=F, y=T -> executes a, e
x=F, y=F -> executes a, e

How many test cases are needed to get 100% statement coverage?

4.4.2 White box techniques - branch & decision coverage - glossary

Branch = a conditional transfer of control from a statement to any other statement
OR

= an unconditional transfer of control from a statement to any other statement except the next statement;

Branch coverage = executed branches / total branches

Decision coverage = executed decision outcomes / total decision outcomes
For components with one entry point, 100% branch coverage is equivalent to 100% decision coverage.


4.4.2 White box techniques - branch & decision coverage - example

Decisions = B2, B3, B5, each with 2 outcomes => 3 * 2 = 6 decision outcomes
Branches = (how many arrows?) = 10

Q1. What are the decision and branch coverage for (B1 -> B2 -> B9)?

Q2. And for (B1 -> B2 -> B3 -> B4 -> B8 -> B2 -> B3 -> B5 -> B6 -> B8 -> B2 -> B3 -> B5 -> B7)?


Answers: 1. 1/6, 2/10 2. 5/6, 9/10


4.4.2 White box techniques - LCSAJ coverage

LCSAJ = Linear Code Sequence And Jump. Defined by a triple, conventionally identified by line numbers in a source code listing:
- the start of the linear code sequence
- the end of the linear code sequence
- the target line to which control flow is transferred
LCSAJ coverage = executed LCSAJ sequences / total number of LCSAJ sequences


4.4.3 White box techniques - data flow coverage

Just as one would not feel confident about a program without executing every statement in it as part of some test, one should not feel confident about a program without having seen the effect of using the value produced by each and every computation.

Data flow coverages:


- All defs = number of exercised definition-use pairs / number of variable definitions
- All c(omputation)-uses = number of exercised definition - c-use pairs / number of definition - c-use pairs
- All p(redicate)-uses = number of exercised definition - p-use pairs / number of definition - p-use pairs
- All uses = number of exercised definition-use pairs / number of definition-use pairs
- Branch condition = Boolean operand values executed / total Boolean operand values
- Branch condition combination = Boolean operand value combinations executed / total Boolean operand value combinations
(see Lee Copeland b2, chap.11)

4.5 Exploratory testing

Exploratory testing = concurrent test design, test execution, test logging and learning, based on a quick test charter containing objectives, and executed within delimited time intervals.
- Uses a structured approach to error guessing, based on experience, available defect data, domain expertise
- On-the-fly design of tests that attack these potential errors
- Skill, intuition and previous experience are vital
The test strategy is built around:
- The project environment
- Quality criteria defined for the project
- Elements of the product
- Risk factors


4.6 Choosing test techniques

Factors used to choose:


- Product or system type
- Standards
- Product requirements
- Available documentation
- Identified risks
- Schedule constraints
- Cost constraints
- Software development life cycle model used
- Testers' skills and (domain) experience

(additional materials: Unit Test design, exercises)


5.1.1 Test organization & independence

Options: independent team or not?

Pluses:
- Testers are not influenced by the other project members
- Can act as the customer's voice
- More objectivity in evaluating product quality issues

Minuses:
- Risk of isolation from the development team
- Communication issues
- Developers can lose their sense of ownership of quality


5.1.2 Tasks of the test leader

- Plans and estimates the test effort, collaborates with the project manager
- Elaborates the test strategy
- Initiates test specification, implementation, execution
- Sets up configuration management of the test environment & deliverables
- Monitors and controls the execution of tests
- Chooses suitable test metrics
- Decides if and to what degree to automate the tests
- Selects tools
- Schedules tests
- Prepares summary test reports
- Evaluates test measurements


5.1.2 Tasks of the tester

Test Analyst:
- Identify test objectives (targets)
- Review product requirements and software specifications
- Review test plans and test cases
- Verify requirements-to-test-cases traceability
- Define test scenario details

Test Designer:
- Define the test approach (procedure)
- Structure the test implementation
- Elaborate test case lists and write the main test cases
- Assess testability
- Define test environment details

Tester:
- Define the test approach (procedure)
- Write test cases
- Review test cases (peer review)
- Implement and execute tests
- Compare test results with the test oracle
- Assess test risks
- Gather test measures
- Record defects, prepare defect reports

5.2.1-5.2.2-5.2.3 Test planning


Test plan = a document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and test measurement techniques to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

IEEE 829 test plan contents:
- Test plan identifier
- Introduction
- Test items
- Features to be tested
- Features not to be tested
- Approach
- Item pass / fail criteria
- Suspension criteria & resumption criteria
- Test deliverables
- Testing tasks
- Environment
- Responsibilities
- Staffing and training needs
- Schedules
- Risks and contingencies
- Approvals


5.2.1-5.2.2-5.2.3 Test planning

Determine scope:
- Study project documents, the software life cycle used, specifications, desired product quality attributes
- Identify and communicate with other stakeholders
- Clarify test process expectations

Determine risks:
- Choose a quality risk analysis method (e.g. FMEA)
- Document the list of risks, probability, impact, priority; identify mitigation actions

Refine the plan:
- Define detailed role responsibilities
- Select the test strategy and test levels; test strategy issues (alternatives): preventive approach, reactive approach, risk-based, model (standard) based
- Choose testing techniques (white and/or black box)
- Select metrics to be used for defect tracking, coverage, monitoring

Estimate testing effort, determine costs, develop the schedule:
- Define necessary roles
- Decompose the test project into phases and tasks (WBS)
- Schedule tasks, assign resources, set up dependencies
- Develop a budget
- Obtain commitment for the plan from the stakeholders

Define entry and exit criteria. Exit criteria:
- Coverage measures
- Defect density or trend measures
- Cost
- Residual risk estimation
- Time or market based

5.2.4 Test estimation

Two approaches:
- based on metrics (historical data)
- made by domain experts

Testing effort depends on:
- product characteristics (complexity, specification)
- the development process (team skills, tools, time factors)
- defects discovered and the rework involved
- the failure risk of the product (likelihood, impact)

Time for confirmation testing and regression testing must be considered too


5.2.5 Test strategies


Test approach (test strategy) = The chosen approaches and decisions made that follow from the test project's and test team's goal or mission.
The mission is typically effective and efficient testing, and the strategies are the general policies, rules, and principles that support this mission. Test tactics are the specific policies, techniques, processes, and the way testing is done.
One way to classify test approaches or strategies is based on the point in time at which the bulk of the test design work is begun:
- Preventative approaches, where tests are designed as early as possible
- Reactive approaches, where test design comes after the software or system has been produced
Or, another taxonomy:
- Analytical - such as risk-based testing
- Model-based - such as stochastic testing
- Methodical - such as failure-based, experience-based
- Process- or standard-compliant
- Dynamic and heuristic - such as exploratory testing
- Consultative
- Regression-averse

5.3 Test progress monitoring, reporting & control

Monitoring - test metrics used:
- Test cases (% passed / % failed)
- Defects (found, fixed/found, density, trends)
- Test coverage (% of test cases executed)

Control: identify and implement corrective actions for:
- The testing process
- Other software life-cycle activities
Possible corrective actions:
- Assign extra resources
- Re-allocate resources
- Adjust the test schedule
- Arrange for extra test environments
- Refine the completion criteria

Reporting:
- Defects remaining
- Coverage metrics
- Identified risks


5.4 Configuration management

IEEE definition of Configuration Management: a discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.

Configuration Management:
- identifies the current configuration (hardware, software) in the life cycle of the system, together with any changes that are in the course of being implemented
- provides traceability of changes through the lifecycle of the system
- permits the reconstruction of a system whenever necessary

Only persistent objects must be subject to Configuration Management, therefore, the data processed by a system cannot be placed under Configuration Management.
Related to Version Control and Change Control

5.4 Configuration management

Configuration Management activities:
- Configuration identification = selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation
- Configuration control = evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification
- Status accounting = recording and reporting of information needed to manage a configuration effectively, including: a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of the approved changes
- Configuration auditing = the function of checking that the software product matches the configuration items identified previously

5.4 Configuration management

In testing, Configuration Management must:
- Identify all test-ware items
- Establish and maintain the integrity of the testing deliverables (test plans, test cases, documentation) through the project life cycle
- Set and maintain the version of these items
- Track the changes of these items
- Relate test-ware items to other software development items in order to maintain traceability
- Clearly reference all necessary documents in the test plans and test cases


5.5 Risk & Testing

Risk = a factor that could result in future negative consequences, expressed as likelihood and impact.

Project risks (supplier related, organizational, technical)

Product risks (defects delivered, poor quality attributes (reliability, usability, performance))

The risks identified can be used to:
- Define the test strategy and techniques to be used
- Define the extent and depth of testing
- Prioritize test cases and procedures (find important defects early)
- Determine if review or training activities could help


5.6 Incident management

Incident = any significant, unplanned event that occurs during testing and requires subsequent investigation and/or correction:
- The system does not function as expected
- Actual results differ from expected results
- Required features are missing
Incident reports can be raised against:
- documents placed under the review process
- product defects related to functional & non-functional requirements
- documentation anomalies (manuals, on-line help)
- test-ware defects (errors in test cases or test procedures)
Incident reports raised against product defects are also named bug reports.

5.6 Incident management - Recommended bug report format

Bug report fields:
- Defect ID
- Component name and build version
- Reported by and date
- Error type
- Severity
- Priority
- Summary and detailed description
- Attached documents

Bug statuses:
- Issued - has just been reported
- Opened - a programmer is working to solve it
- Fixed - the programmer thinks it is repaired
- Not solved - the tester retested but the bug is not solved
- Deferred - the programmer or PM decided to postpone the decision
- Not-a-bug - the programmer or tester discovered that it is not a defect
- Closed - the bug is solved and verified

(Exercise)


6.1.1 Test tool classification

Management of testing:
- Test management
- Requirements management
- Bug tracking
- Configuration management

Static testing:
- Review support
- Static analysis
- Modeling

Test specification:
- Test design
- Test data preparation

Test execution:
- Record and play
- Unit test framework
- Result comparators
- Coverage measurement
- Security

Performance and monitoring:
- Dynamic analysis
- Load and stress testing
- Monitoring

Other:
- Specific application areas (TTCN-3)
- Other tools


6.1.2 Tool support - Management of testing

Test management:
- Manage testing activities
- Manage test-ware traceability
- Individual support
- Test result reporting
- Test metrics tools

Configuration management:
- Version and change control
- Builder

Requirements management:
- Checking
- Traceability
- Coverage

Bug tracking:
- Project related
- Department or company related


6.1.3 Tool support - Static testing

Review support:
- Process support
- Communications support
- Team support

Static analysis:
- Coding standards
- Web site structure
- Metrics

Modeling:
- SQL database management


6.1.4 Tool support - Test specification

Test design:
- From requirements
- From design models
- Test stub and driver generators

Test data preparation


6.1.5 Tool support - Test execution and logging

- Record and play
- Scripting
- Unit test framework
- Test harness frameworks
- Result comparators
- Coverage measurement
- Security testing support


6.1.6 Tool support - Performance and monitoring

- Dynamic analysis: time dependencies, memory leaks
- Load testing
- Stress testing
- Monitoring


6.2.1 Tool support - benefits

- Repetitive work is reduced (e.g. running regression tests, re-entering the same test data, and checking against coding standards)
- Greater consistency and repeatability (e.g. tests executed by a tool, and tests derived from requirements)
- Objective assessment (e.g. static measures, coverage and system behavior)
- Ease of access to information about tests or testing (e.g. statistics and graphs about test progress, incident rates and performance)


6.2.1 Tool support - risks

- Unrealistic expectations for the tool (including functionality and ease of use)
- Underestimating the time, cost and effort for the initial introduction of a tool (including training and external expertise)
- Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the need for changes in the testing process and continuous improvement of the way the tool is used)
- Underestimating the effort required to maintain the test assets generated by the tool
- Over-reliance on the tool (as a replacement for test design, or where manual testing would be better)
- Lack of a dedicated test automation specialist
- Lack of good understanding of and experience with the issues of test automation
- Lack of stakeholders' commitment to the implementation of such a tool

6.2.2 Tool support - special considerations

Test execution tools:
- Significant implementation effort
- Record and play tools are unstable when changes occur
- Technical expertise is mandatory

Performance testing tools:
- Expertise in design and results interpretation is mandatory

Static analysis tools:
- Lots of warnings generated
- Sensitive to build management

Test management tools:
- Interfacing with other tools (Windows Office, at least) is critical


6.2.2 Tool support - testing in Visual Studio Team System


Developer:
- use Test Driven Development methods
- manage unit testing
- analyze code coverage
- use static code analysis
- use a code profiler to handle performance issues

Tester:
- manage test cases
- manage test suites
- manage manual testing
- manage bug tracking
- record / play web tests
- run load tests
- report test results

6.2.2 Introducing a tool into an organization

Tool selection process:
- Identify requirements
- Identify constraints
- Check available tools on the market (feature evaluation)
- Evaluate a short list (feature comparison): demos, quick pilots
- Select a tool

Note: there are many free testing tools available, some of them also online ( www.testersdesk.com )


ISTQB Foundation Exam guidelines

40 multiple-choice questions (4 options each); 1 hour exam


Score >= 65% (>= 26 correct answers) to pass
50% K1, 30% K2, 20% K3
- Chapter 1 - 7 questions
- Chapter 2 - 6 questions
- Chapter 3 - 3 questions
- Chapter 4 - 12 questions
- Chapter 5 - 8 questions
- Chapter 6 - 4 questions

K1: The candidates will recognize, remember and recall a term or concept.
K2: The candidates can select the reasons or explanations for statements related to the topic. They can summarize, compare, classify and give examples for concepts of testing. K3: The candidates can select the correct application of a concept or techniques and/or apply it to a given context.

Example (see others): Which statement regarding testing is correct?
a) Testing is planning, specifying and executing a program with the aim of finding defects
b) Testing is the process of correcting defects identified in a developed program
c) Testing is to localize, analyze and correct the direct defect cause
d) Testing is independently reviewing a system against its requirements

