
Bottom-Up Integration Testing

After unit testing, the individual components are combined into a system.
Bottom-Up Integration: each component at the lowest level of the hierarchy is tested individually first; then the components that rely upon these are tested.
Component Driver: a routine that simulates a test call from a parent component to a child component.

Bottom-Up Testing Example


[Diagram: component A at the top, calling child components B and C]

1) Test B and C individually (using drivers).
2) Test A such that it calls B. If an error occurs, we know that the problem is in A or in the interface between A and B.
3) Test A such that it calls C. If an error occurs, we know that the problem is in A or in the interface between A and C.
(-) Top-level components are the most important, yet they are tested last.
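To make the driver idea concrete, here is a minimal sketch in Python; the component name component_b and its discount logic are hypothetical. The driver plays the role of parent A and calls the child directly:

    import unittest

    # Hypothetical child component B from the hierarchy above.
    def component_b(order_total):
        """Applies a 10% discount to an order total."""
        return round(order_total * 0.9, 2)

    class ComponentBDriver(unittest.TestCase):
        """Driver: simulates the calls that parent A would make to child B."""

        def test_b_applies_discount(self):
            # The driver supplies the arguments A would pass at runtime.
            self.assertEqual(component_b(100.0), 90.0)

        def test_b_handles_zero_total(self):
            self.assertEqual(component_b(0.0), 0.0)

    if __name__ == "__main__":
        unittest.main()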

Top-Down Testing Example


[Diagram: component A at the top, calling child components B and C]

1) Test A individually (use stubs for B and C).
2) Test A such that it calls B (stub for C). If an error occurs, we know that the problem is in B or in the interface between A and B.
3) Test A such that it calls C (stub for B). If an error occurs, we know that the problem is in C or in the interface between A and C.
* Stubs are used to simulate the activity of components that are not currently under test.
(-) May require many stubs.
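A matching Python sketch of step 1, with a hypothetical component_a and a stub standing in for child B (here unittest.mock.patch replaces the real child with a canned response):

    import unittest
    from unittest.mock import patch

    # Hypothetical components from the hierarchy above; B is not under test.
    def component_b(order_total):
        raise NotImplementedError("real B is integrated later")

    def component_a(order_total):
        """Top-level component A: delegates discounting to child B."""
        discounted = component_b(order_total)
        return max(discounted, 0.0)

    class ComponentATopDownTest(unittest.TestCase):
        """Step 1: test A individually, with its child stubbed out."""

        def test_a_with_stubbed_b(self):
            # The stub simulates B's activity with a fixed return value.
            with patch(__name__ + ".component_b", return_value=90.0):
                self.assertEqual(component_a(100.0), 90.0)

    if __name__ == "__main__":
        unittest.main()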

Big-Bang Integration
After all components are unit tested, we may test the entire system with all its components in action. (-) It may be impossible to figure out where faults occur unless the faults are accompanied by component-specific error messages.

Sandwich Integration
The multi-level component hierarchy is divided into three levels, with the test target in the middle:
- the top-down approach is used in the top layer;
- the bottom-up approach is used in the lower layer.

Sync & Stabilize Approach

Integration Strategies Comparison

Testing Object Oriented Systems


Objects tend to be small and simple, while the complexity is pushed out into the interfaces. Hence unit testing tends to be easy and simple, but integration testing complex and tedious. (-) Overridden virtual methods require thorough testing, just as the base class methods do.
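One common way to honor the last point is to inherit the base class's test suite, so every test re-runs against the subclass's overridden methods; a minimal Python sketch with a hypothetical Shape/Square hierarchy:

    import unittest

    # Hypothetical class hierarchy for illustration.
    class Shape:
        def area(self):
            return 0.0

    class Square(Shape):
        def __init__(self, side):
            self.side = side

        def area(self):  # overrides the base class method
            return self.side * self.side

    class ShapeTest(unittest.TestCase):
        """Base tests: every Shape must report a non-negative area."""

        def make_instance(self):
            return Shape()

        def test_area_is_non_negative(self):
            self.assertGreaterEqual(self.make_instance().area(), 0.0)

    class SquareTest(ShapeTest):
        """Inherits the Shape tests and re-runs them against Square."""

        def make_instance(self):
            return Square(side=3.0)

    if __name__ == "__main__":
        unittest.main()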


Test-Related Activities
1) Define test objectives (i.e. what qualities of the system you want to verify)
2) Devise test cases
3) Program (and verify!) the tests
4) Execute tests
5) Analyze the results

Test Plan
A document that describes the criteria and the activities necessary for showing that the software works correctly. A test plan is well-defined when, upon completion of testing, every interested party can recognize that the test objectives have been met.

Test Plan Contents


- What automated test tools are used (if any)
- Methods for each stage of testing (e.g. code walk-through for unit testing, top-down integration, etc.)
- Detailed list of test cases for each test stage
- How test data is selected or generated
- How output data & state information is to be captured and analyzed

Static & Dynamic Analysis


Static Analysis: uncovers minor coding violations by analyzing code at compile time (e.g. compiler warnings, FxCop).
Dynamic Analysis: runtime monitoring of code (e.g. profiling, variable monitoring, asserts, tracing, exceptions, runtime type information, etc.).
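A small Python sketch of some dynamic-analysis ingredients named above (asserts, tracing, runtime type checks); the apply_discount function is hypothetical:

    import logging

    logging.basicConfig(level=logging.DEBUG)

    def apply_discount(price, rate):
        # Runtime type information: verify argument types before use.
        assert isinstance(price, float) and isinstance(rate, float)
        # Assert: enforce the valid input range while the program runs.
        assert 0.0 <= rate <= 1.0, "rate must be a fraction between 0 and 1"
        result = price * (1.0 - rate)
        # Tracing: record values as the code actually executes.
        logging.debug("apply_discount(%s, %s) -> %s", price, rate, result)
        return result

    apply_discount(100.0, 0.25)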

Automated Testing
- Stub & driver generation
- Unit test case generation
- Keyboard input simulation / terminal user impersonation
- Web request simulation / web user impersonation
- Code coverage calculation / code path tracing
- Comparing output to the desired output
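As one concrete instance of the coverage/tracing item, Python's standard-library trace module can count which lines execute; the classify function is hypothetical:

    import trace

    # Hypothetical function whose execution paths we want to trace.
    def classify(n):
        if n < 0:
            return "negative"
        return "non-negative"

    # Count executed lines: a minimal form of coverage calculation.
    tracer = trace.Trace(count=True, trace=False)
    tracer.runfunc(classify, 5)
    tracer.runfunc(classify, -5)
    tracer.results().write_results(show_missing=True, summary=True, coverdir=".")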

When to Stop Testing?


Run out of time or money? Curiously, the more faults we find at the beginning of testing, the greater the probability that we will find even more faults if we keep testing further!

Finding and testing all possible path combinations through the code?

Fault Seeding
Faults are intentionally inserted into the code; the test process is then used to locate them. The idea is that:

detected seeded faults / total seeded faults = detected nonseeded faults / total nonseeded faults

Thus we can obtain a measure of the effectiveness of the test process and use it to adjust test time and thoroughness.
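Rearranging the proportion gives an estimate of the total number of nonseeded faults; a small sketch of the arithmetic (the numbers are hypothetical):

    def estimate_total_nonseeded(seeded_found, seeded_total, nonseeded_found):
        # detected seeded / total seeded = detected nonseeded / total nonseeded
        # => total nonseeded = detected nonseeded * total seeded / detected seeded
        detection_rate = seeded_found / seeded_total
        return nonseeded_found / detection_rate

    # Example: 8 of 10 seeded faults found, plus 20 nonseeded faults found,
    # suggests about 25 nonseeded faults in total (5 still undetected).
    print(estimate_total_nonseeded(8, 10, 20))  # 25.0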

Confidence in the Software


The likelihood that the software is fault-free:

C = S/(S+N+1), if n <= N
C = 1, if n > N

where S is the number of seeded faults, N is the expected number of indigenous faults, and n is the number of indigenous faults found (assuming all S seeded faults have been detected).
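The formula translates directly into code; a minimal sketch using the definitions above:

    def confidence(S, N, n):
        """Confidence that at most N indigenous faults remain, given that
        all S seeded faults were detected and n indigenous faults were found."""
        if n > N:
            return 1.0
        return S / (S + N + 1)

    # Example: seed 9 faults, claim the software is fault-free (N = 0),
    # find all 9 seeded faults and no indigenous ones (n = 0): C = 0.9.
    print(confidence(9, 0, 0))  # 0.9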

Read
Chapter 8 from the white book.
