
1. Testing Approaches

There are two major approaches to testing:

• Black-box (or closed-box or data-driven or input/output-driven or behavioral) testing
• White-box (or clear-box or glass-box or logic-driven) testing

“Black-box”
• inputs are given to observe the behavior (output) of the program

“White-box”
• This approach examines the internal structure of the program.
• This method of testing exposes both errors of omission (errors due to a neglected specification) and errors of commission (something done that is not defined by the specification).

• Exhaustive path testing will not address missing paths and data-sensitive errors.

In conclusion,

• It is not feasible to do exhaustive testing in either the black-box or the white-box approach.
• Neither approach is superior; one has to use both approaches, as they really complement each other.
• Last, but not the least, static testing still plays a large role in software testing.

2. Levels of Testing

In developing a large system, testing usually involves several stages (refer to the following figure [2]).

• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing

The objective of unit and integration testing is to ensure that the code implements the design properly.

Unit Testing (or component or module testing).

Integration Testing. The process of verifying the synergy of system components against the program Design Specification is called integration testing.

• Once the system is integrated, the overall functionality is tested against the Software Requirements Specification (SRS).

Integration Strategies

Depending on the design approach, one of the following integration strategies can be adopted:

· Big Bang approach: This consists of testing each module individually and linking all these modules together only when every module in the system has been tested.

• Locating interface errors, if any, becomes difficult here.

· Incremental approach

• Top-down testing
• Bottom-up testing
• Sandwich testing

Top-down: testing begins with the topmost module.

• Subordinate modules are replaced by stubs, which simply return control to their superior modules.

Bottom-up testing begins with elementary modules.

Top-Down Testing

Advantages:
• Advantageous if major flaws occur toward the top of the program
• An early skeletal program allows demonstrations and boosts morale

Disadvantages:
• Stub modules must be produced
• Test conditions may be impossible, or very difficult, to create
• Observation of test output is more difficult, as only simulated values will be used initially; for the same reason, program correctness can be misleading

Bottom-Up Testing

Advantages:
• Advantageous if major flaws occur toward the bottom of the program
• Test conditions are easier to create
• Observation of test results is easier (as “live” data is used from the beginning)

Disadvantages:
• Driver modules must be produced
• The program as an entity does not exist until the last module is added
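To make the stub and driver roles concrete, here is a minimal Python sketch. The module names (generate_report, compute_total) and the canned data are hypothetical, not from the text.

# --- Top-down: the high-level module is real; its subordinate is a stub ---
def fetch_records_stub(query):
    """Stub for the unfinished database layer: returns canned data
    and simply hands control back to its superior module."""
    return [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]

def generate_report(query, fetch=fetch_records_stub):
    """Superior module under test; the real fetch function is injected later."""
    records = fetch(query)
    total = sum(r["amount"] for r in records)
    return f"{len(records)} records, total = {total}"

# --- Bottom-up: the low-level module is real, exercised by a driver ---
def compute_total(records):
    """Elementary module under test."""
    return sum(r["amount"] for r in records)

def driver():
    """Driver module: feeds test data to the elementary module and checks
    the result, standing in for the not-yet-written caller."""
    assert compute_total([{"amount": 100}, {"amount": 250}]) == 350

if __name__ == "__main__":
    print(generate_report("SELECT *"))  # top-down check via a stub
    driver()                            # bottom-up check via a driver
    print("integration scaffolding checks passed")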

System Testing
• Once the system is integrated, the overall functionality is tested against the Software Requirements Specification (SRS). Then, other non-functional requirements, such as performance, are tested to ensure the system’s readiness to work successfully in the customer’s actual working environment.

• System testing verifies that the system does what the customer wants it to do.


• System testing begins with function testing. Since the focus here is on
functionality, a black-box approach is taken

• Performance testing addresses the non-functional requirements

Types of Performance Tests


• Stress tests
• Volume tests
• Configuration tests
• Compatibility tests
• Regression tests
• Security tests
• Timing tests
• Environmental tests
• Quality tests
• Recovery tests
• Maintenance tests
• Documentation tests
• Human factor (or Usability) tests

Acceptance Testing
• The next step is the customer’s validation of the system against the User Requirements Specification (URS). The customer does this in their own working environment.

• Customers can evaluate the system either by conducting a benchmark test or by a pilot test.

In a benchmark test, the system performance is evaluated against test cases that represent typical conditions under which the system will operate when actually installed.

A pilot test installs the system on an experimental basis, and the system is evaluated against everyday working conditions.

• Sometimes the system is piloted in-house before the customer runs the
real pilot test. The in-house test, in such case, is called an alpha test,
and the customer’s pilot is a beta test.
• A third approach, parallel testing, is used when a new system is replacing an existing one or is part of a phased development. The new system is put to use in parallel with the previous version, facilitating a gradual transition of users and allowing the new system to be compared and contrasted with the old.
3. Test Techniques

We shall discuss the Black Box and White Box approaches.

3.1 Black Box Approach

- Equivalence Partitioning
- Boundary Value Analysis
- Cause Effect Analysis
- Cause Effect Graphing
- Error Guessing

I. Equivalence Partitioning

The premise is that testing one value from a class is equivalent to testing any other value from that class.
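As an illustration, here is a minimal Python sketch, assuming a hypothetical age validator whose input domain splits into one valid class (0–120) and two invalid classes.

def is_valid_age(age: int) -> bool:
    """Hypothetical validator: ages 0..120 are valid."""
    return 0 <= age <= 120

# One representative value per equivalence class is enough, since testing
# one value from a class is equivalent to testing any other from it.
representatives = {
    "invalid_below": (-5, False),
    "valid":         (35, True),
    "invalid_above": (200, False),
}

for name, (value, expected) in representatives.items():
    assert is_valid_age(value) == expected, name
print("all equivalence classes pass")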

III. Cause Effect Analysis

The main drawback of the previous two techniques is that they do not explore combinations of input conditions.

IV. Cause Effect Graphing

This is a rigorous approach, recommended for complex systems only.

• Link causes and effects in a Boolean graph which is the cause-effect graph.
• A cause is an input condition or an equivalence class of input conditions
• An effect is an output condition or a system transformation.
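A minimal Python sketch of the idea follows; the login rule and its causes and effects are hypothetical, chosen only to show causes combined into effects with Boolean logic.

from itertools import product

# Causes (input conditions):
#   c1: username is registered
#   c2: password matches
#   c3: account is locked

# Effects (output conditions) as Boolean combinations of the causes:
def grant_access(c1, c2, c3):
    return c1 and c2 and not c3

def show_lock_message(c1, c3):
    return c1 and c3

# Enumerating cause combinations yields a decision table for the graph,
# from which test cases are derived.
for c1, c2, c3 in product([False, True], repeat=3):
    if grant_access(c1, c2, c3):
        effect = "grant access"
    elif show_lock_message(c1, c3):
        effect = "show lock message"
    else:
        effect = "deny access"
    print(f"c1={c1!s:5} c2={c2!s:5} c3={c3!s:5} -> {effect}")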

3.2 White Box Approach

- Basis Path Testing

• The method attempts statement coverage, decision coverage and condition coverage.
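As a sketch, the hypothetical Python function below has two decision points, giving cyclomatic complexity V(G) = 3 and hence three basis paths; a fourth test exercises the second condition of the compound predicate for condition coverage.

def grade(score: int) -> str:
    """Hypothetical grading function with V(G) = 3."""
    if score < 0 or score > 100:   # decision 1 (two conditions)
        return "invalid"
    if score >= 50:                # decision 2
        return "pass"
    return "fail"

# One test per basis path; together they achieve statement,
# decision and condition coverage of grade().
assert grade(-1) == "invalid"    # path 1: decision 1 true (score < 0)
assert grade(101) == "invalid"   # same path, second condition (score > 100)
assert grade(75) == "pass"       # path 2: decision 1 false, decision 2 true
assert grade(30) == "fail"       # path 3: both decisions false
print("all basis paths covered")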

4. When to stop testing

• Stop When

All test cases, derived from equivalence partitioning, cause-effect analysis & boundary-value analysis, are executed without detecting errors.
Drawbacks

• Rather than defining a goal & allowing the tester to select the most appropriate way of achieving it, it does the opposite!!!

• Defined methodologies are not suitable for all occasions!!!

• There is no way to guarantee that a particular methodology is properly & rigorously used

• Depends on the abilities of the tester; no quantification is attempted!

• Completion Criterion Based On The Detection Of A Pre-Defined Number Of Errors

How To Determine The "Number Of Predefined Errors"?

Predictive Models

• Based on the history of usage / initial testing & the errors found

Defect Seeding Models

• Based on the initial testing & the ratio of detected seeded errors to detected
unseeded errors

(Very critically depends on the quality of 'seeding')

Using this approach, as an example, we can say that testing is complete if 80% of
the pre-defined number of errors are detected or the scheduled four months of
testing is over, whichever comes later.
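As a minimal sketch of the defect-seeding arithmetic in Python (the counts are illustrative, not from the text):

seeded_total   = 20   # errors deliberately planted before testing
seeded_found   = 16   # planted errors rediscovered by testing
unseeded_found = 40   # real errors found by the same testing

# If testing finds seeded and real errors at the same rate, then
# total real errors ≈ unseeded_found * seeded_total / seeded_found
estimated_total = unseeded_found * seeded_total / seeded_found
remaining = estimated_total - unseeded_found

print(f"estimated real errors: {estimated_total:.0f}")  # 50
print(f"estimated still latent: {remaining:.0f}")       # 10

With these illustrative numbers, 40 of an estimated 50 real errors (80%) have been found, so the completion criterion above would be met.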

Caution !

The Above Condition May Never Be Achieved For The Following Reasons

• Over Estimation Of Predefined Errors

(The Software Is Too Good !!)

• Inadequate Test Cases


Hence the best completion criterion may be a combination of all the methods discussed.

Module Test

• Defining test case design methodologies (such as boundary value analysis...)

Function & System Test

• Based on finding the pre-defined number of defects

5. Debugging

• Breakpoints

• Desk Checking

• Dumps

• Single-Step Operation

• Traces
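As a minimal illustration of two of these aids in Python (pdb's breakpoint() for single-stepping, and sys.settrace for a crude line trace); the function is hypothetical:

import sys

def running_total(values):
    total = 0
    for v in values:
        # breakpoint()   # uncomment to stop here; in pdb, 's' single-steps
        total += v       # and 'p total' dumps the variable
    return total

def tracer(frame, event, arg):
    # A crude trace: report every executed line with its location
    if event == "line":
        print(f"trace: {frame.f_code.co_name} line {frame.f_lineno}")
    return tracer

sys.settrace(tracer)               # turn tracing on
result = running_total([1, 2, 3])  # every line executed is reported
sys.settrace(None)                 # turn tracing off
print("result:", result)           # a simple dump of the final value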
