
Software Testing

Milan Springl
SENG 621 - Software Process Management

University of Calgary
Alberta, Canada T2N 1N4
springl@cpsc.ucalgary.ca

or

mspring@nortel.com
http://sern.cpsc.ucalgary.ca/~springl/

ABSTRACT
This paper looks at what Software Testing is and then briefly explains some of the more common methods
and techniques used in the testing process. First we look at the traditional (procedural) environment and
then the Object Oriented environment. Comparisons are made (where appropriate) between the two models
and methods and techniques used. Lastly some issues with testing under both models are discussed.

TABLE OF CONTENTS
1. INTRODUCTION

2. SOFTWARE TESTING METHODS

2.1. Test Case Design

2.2. White-Box Testing

2.3. Basis Path Testing

2.4. Control Structure Testing

2.5. Black-Box Testing

3. SOFTWARE TESTING STRATEGIES

3.1. Unit Testing

3.2. Integration Testing

3.2.1. Top-Down Strategy

3.2.2. Bottom-Up Strategy

3.2.3. Big-Bang Strategy

3.3. Function Testing

3.4. System Testing

4. SOFTWARE TESTING METRICS

5. OBJECT ORIENTED TESTING METHODS

5.1. OO Test Case Design

5.2. Fault Based Testing

5.3. Class Level Methods

5.3.1. Random Testing

5.3.2. Partition Testing

5.4. Scenario-based Testing

6. OBJECT ORIENTED TESTING STRATEGIES

6.1. OO Unit Testing

6.2. OO Integration Testing

6.3. OO Function Testing and OO System Testing

7. OBJECT ORIENTED TESTING METRICS

8. ISSUES

8.1. Procedural Software Testing Issues

8.2. OO Software Testing Issues

9. CONCLUSION

10. REFERENCES


1. INTRODUCTION
Software has infiltrated almost all areas of industry and has, over the years, become more and more
widespread as a crucial component of many systems. System failure in any industry can be very costly, and in
the case of critical systems (flight control, nuclear reactor monitoring, medical applications, etc.) it can
mean lost human lives. These "cost" factors call for some kind of system failure prevention. One way to
ensure a system's reliability is to test the system extensively. Since software is a system component, it
too requires a testing process.

Software testing is a critical component of the software engineering process. It is an element of software
quality assurance and can be described as a process of running a program in such a manner as to uncover
any errors. This process, while seen by some as tedious, tiresome and unnecessary, plays a vital role in
software development.

The process of software testing involves creating test cases to "break the system" but before these can be
designed, a few principles have to be observed:

Testing should be based on user requirements. This is in order to uncover any defects that might cause the
program or system to fail to meet the client's requirements.

Testing time and resources are limited. Avoid redundant tests.

It is impossible to test everything. Exhaustive tests of all possible scenarios are impossible, simply because
of the many different variables affecting the system and the number of paths a program flow might take.

Use effective resources to test. This represents use of the most suitable tools, procedures and individuals to
conduct the tests. The test team should use tools that they are confident and familiar with. Testing
procedures should be clearly defined. Testing personnel may be a technical group of people independent of
the developers.

Test planning should be done early. This is because test planning can begin independently of coding and as
soon as the client requirements are set.

Testing should begin at the module level. The focus of testing should be concentrated on the smallest
programming units first and then expanded to other parts of the system.

We look at software testing in the traditional (procedural) sense and then describe some testing strategies
and methods used in Object Oriented environment. We also introduce some issues with software testing in
both environments.


2. SOFTWARE TESTING METHODS


There are many ways to conduct software testing, but the most common approaches rely on the methods
described below.

2.1. Test Case Design


Test cases should be designed in such a way as to uncover quickly and easily as many errors as possible.
They should "exercise" the program by using and producing inputs and outputs that are both correct and
incorrect.

Variables should be tested using all possible values (for small ranges) or typical and out-of-bound values
(for larger ranges). They should also be tested using valid and invalid types and conditions. Arithmetical
and logical comparisons should be examined as well, again using both correct and incorrect parameters.
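
As an illustration of this kind of value selection, here is a minimal sketch in Python (not taken from the
paper; the validate_percentage function and the chosen values are invented for the example):

    # Sketch of test-value selection for a hypothetical input defined as
    # "a percentage between 0 and 100". Names and values are illustrative.

    def validate_percentage(value):
        """Return True if value is a valid percentage, False otherwise."""
        return isinstance(value, int) and 0 <= value <= 100

    # Small range (e.g. a flag): every possible value can be tried.
    flag_values = [True, False]
    print("small range - try all values:", flag_values)

    # Large range: typical values plus out-of-bound values and invalid types.
    percentage_values = [50,            # typical
                         0, 100,        # limits
                         -1, 101,       # out of bounds
                         "50", None]    # invalid types
    for v in percentage_values:
        print(v, validate_percentage(v))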

The objective is to test all modules and then the whole system as completely as possible using a reasonably
wide range of conditions.

2.2. White-Box Testing

The white-box method relies on intimate knowledge of the code and the procedural design to derive the test
cases. It is most widely utilized in unit testing to determine all possible paths within a module, to execute
all loops and to test all logical expressions.

Using white-box testing, the software engineer can (1) guarantee that all independent paths within a
module have been exercised at least once; (2) examine all logical decisions on their true and false sides; (3)
execute all loops and test their operation at their limits; and (4) exercise internal data structures to assure
their validity (Pressman, 1997).

This form of testing concentrates on procedural detail. However, there is no automated tool or testing
system for this method. Therefore, even for relatively small systems, exhaustive white-box testing is
impossible because of all the possible path permutations.

2.3. Basis Path Testing


Basis path testing is a white-box technique. It allows the design and definition of a basis set of execution
paths. The test cases created from the basis set allow the program to be executed in such a way as to
examine each possible path through the program by executing each statement at least once (Pressman,
1997).

To be able to determine the different program paths, the engineer needs a representation of the logical flow
of control. The control structure can be illustrated by a flow graph. A flow graph can be used to represent
any procedural design.

Figure 1 Flow graph of an 'If-then-else' statement

Next, a metric called cyclomatic complexity can be used to determine the number of independent paths; it
provides the number of test cases that have to be designed. This ensures coverage of all program
statements (Pressman, 1997).
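
As a small illustration of the metric, the sketch below (in Python, not from the paper) computes cyclomatic
complexity from an invented flow graph using V(G) = E - N + 2, where E is the number of edges and N the
number of nodes:

    # Minimal sketch: cyclomatic complexity of a flow graph, V(G) = E - N + 2.
    # The graph is illustrative (an if-then-else followed by a loop); it is
    # not the graph shown in Figure 1.

    edges = [(1, 2), (1, 3),   # if-then-else branches
             (2, 4), (3, 4),   # branches rejoin
             (4, 5), (5, 4),   # loop body and back-edge
             (4, 6)]           # exit
    nodes = {n for edge in edges for n in edge}

    v_g = len(edges) - len(nodes) + 2
    print("Cyclomatic complexity:", v_g)   # number of independent paths / test cases needed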

2.4. Control Structure Testing

Because basis path testing alone is insufficient, other techniques should be utilized.

Condition testing can be utilized to design test cases which examine the logical conditions in a program. It
focuses on all conditions in the program and includes testing of both relational expressions and arithmetic
expressions.

This can be accomplished using branch testing and/or domain testing methods. Branch testing executes
both the true and false branches of a condition. Domain testing utilizes values on the left-hand side of the
relation by making them greater than, equal to and less than the right-hand side value. This method tests
both the values and the relational operators in the expression.
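
A minimal sketch of domain testing in Python (the over_limit function and its limit are assumptions made
for the example):

    # Domain testing of the condition "x > limit": the left-hand side is set
    # greater than, equal to and less than the right-hand side value.

    def over_limit(x, limit):
        return x > limit

    LIMIT = 10
    for x in (LIMIT + 1, LIMIT, LIMIT - 1):   # greater, equal, less
        print(x, over_limit(x, LIMIT))
    # Expected: True, False, False -- a faulty ">=" in the code would be
    # caught by the "equal" case.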

The data flow testing method is effective for error detection because it is based on the relationships between
statements in the program according to the definitions and uses of variables.

The loop testing method concentrates on the validity of loop structures.

2.5. Black-Box Testing


Black-box testing, on the other hand, focuses on the overall functionality of the software. That is why it is the
chosen method for designing test cases used in functional testing. This method allows functional testing
to uncover faults such as incorrect or missing functions, errors in any of the interfaces, errors in data structures
or databases, and errors related to performance and program initialization or termination.

To perform a successful black-box test, the relationships between the many different modules in the system
model need to be understood. Next, all necessary ways of testing all object relationships need to be defined.
For this, a graph representing all the objects can be constructed. Each object is represented by a node, and
links between the nodes show the direct node-to-node relationships. An arrow on a link shows the
direction of the relationship. Each node and link is then further described by a node weight or link weight,
respectively. This method is called graph-based testing (Pressman, 1997).

Equivalence partitioning, on the other hand, divides input conditions into equivalence classes such as
Boolean, value, range or set of values. These classes represent sets of valid or invalid states for input
conditions (Pressman, 1997).

Boundary value analysis (BVA), as the term suggests, concentrates on designing test cases that examine
the upper and lower limits of an equivalence class. These test cases are based not solely on input
conditions, as in equivalence partitioning above, but on output conditions as well (Pressman, 1997).
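
The following sketch (Python; the month example and helper function are invented for illustration) combines
equivalence partitioning with boundary value analysis for an input defined as "an integer between 1 and 12":

    # Equivalence classes and boundary values for a "month" input (1..12).

    def is_valid_month(m):
        return isinstance(m, int) and 1 <= m <= 12

    equivalence_classes = {
        "valid":       [6],        # one representative of the valid class
        "below range": [0],        # invalid class: too small
        "above range": [13],       # invalid class: too large
        "wrong type":  ["June"],   # invalid class: not an integer
    }
    boundary_values = [0, 1, 2, 11, 12, 13]   # just below, at and above each limit

    for name, values in equivalence_classes.items():
        for v in values:
            print(name, v, is_valid_month(v))
    for v in boundary_values:
        print("boundary", v, is_valid_month(v))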

Comparison (back-to-back) testing involves the development of independent versions of a "critical"
program/system. The versions are then run in parallel on the same test cases to see whether the outputs
they produce are identical (Pressman, 1997).


3. SOFTWARE TESTING STRATEGIES


In order to conduct a proper and thorough set of tests, the types of testing described below should be
performed in the order in which they are presented. However, some system or hardware testing can happen
concurrently with software testing.

3.1. Unit Testing
The unit testing procedure utilizes the white-box method and concentrates on testing individual programming
units. These units are sometimes referred to as modules or atomic modules, and they represent the smallest
programming entity.

Unit testing is essentially a set of path tests performed to examine the many different paths through the
modules. These tests are conducted to show that all paths in the program are solid and without
errors and will not cause abnormal termination of the program or other undesirable results.
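
A minimal unit-test sketch in Python (using the standard unittest module; the grade function is a hypothetical
module under test, not something from the paper). Each test case drives a different path through the unit:

    import unittest

    def grade(score):
        """Hypothetical module under test: map a numeric score to a result."""
        if score < 0 or score > 100:
            raise ValueError("score out of range")
        if score >= 50:
            return "pass"
        return "fail"

    class GradeTests(unittest.TestCase):
        def test_pass_path(self):
            self.assertEqual(grade(75), "pass")

        def test_fail_path(self):
            self.assertEqual(grade(30), "fail")

        def test_error_path(self):
            with self.assertRaises(ValueError):
                grade(-5)

    if __name__ == "__main__":
        unittest.main()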

3.2. Integration Testing


Integration testing focuses on testing multiple modules working together. Two basic types of integration are
usually used: top-down or bottom-up.

Top-down, as the term suggests, starts at the top of the program hierarchy and travels down its branches.
This can be done in either a depth-first manner (following the shortest path down to the deepest level) or a
breadth-first manner (across the hierarchy before proceeding to the next level). The main advantage of this
type of integration is that the basic skeleton of the program/system can be seen and tested early. The main
disadvantage is the use of program stubs until the actual modules are written. This limits the upward flow of
information and therefore does not provide a good test of the top-level modules.

The bottom-up type of integration has the lowest-level modules built and tested first, individually and in
clusters, using test drivers. This ensures each module is fully tested before it is utilized by its calling module.
This method has a great advantage in uncovering errors in critical modules early. The main disadvantage is
the fact that many modules must be built before a working program can be presented.

The integration testing procedure can be performed in three ways: Top-down, Bottom-up, or using an approach
called "Big-Bang" (Humphrey, 1989).

3.2.1. Top-Down Strategy

Top down integration is basically an approach where modules are developed and tested starting at the top
level of the programming hierarchy and continuing with the lower levels.

It is an incremental approach because we proceed one level at a time. It can be done in either "depth" or
"breadth" manner.

• Depth means we proceed from the top level all the way down to the lowest level.
• Breadth, on the other hand, means that we start at the top of the hierarchy and then go to the next
level. We develop and test all modules at this level before continuing with another level.

Either way, this testing procedure allows us to establish a complete skeleton of the system or product.

One benefit of Top-down integration is that, having the skeleton, we can test major functions early in the
development process.

At the same time we can also test any interfaces that we have and thus discover any errors in that area very
early on.

But the major benefit of this procedure is that we have a partially working model to demonstrate to the
clients and to top management. This, of course, builds everybody's confidence not only in the development
team but also in the model itself. We have something that shows our design was correct and that we took the
correct approach to implement it.

However, there are some drawbacks to this procedure as well:

Using stubs does not permit all the necessary upward data flow. There is simply not enough data in the
stubs to feed back to the calling module.

As a result, the top-level modules cannot really be tested properly, and every time stubs are replaced
with the actual modules, the calling modules should be re-tested for integrity.
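
To make the role of stubs concrete, here is a minimal sketch in Python (all module names are invented): the
top-level module is real while the lower-level module it calls is replaced by a stub that returns canned data,
which is exactly why the upward data flow is limited:

    def fetch_balance_stub(account_id):
        """Stub standing in for a not-yet-written lower-level module."""
        return 100.00   # canned value; the real module would query a database

    def print_statement(account_id, fetch_balance=fetch_balance_stub):
        """Top-level module under test; it calls the lower-level module."""
        balance = fetch_balance(account_id)
        return "Account {}: balance {:.2f}".format(account_id, balance)

    # The top level can be exercised before the real lower-level module exists.
    print(print_statement("A-42"))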

3.2.2. Bottom-Up Strategy

The Bottom-up approach, as the name suggests, is the opposite of the Top-down method.

This process starts with building and testing the low level modules first, working its way up the hierarchy.

Because the modules at the low levels are very specific, we may need to combine several of them into what
is sometimes called a cluster or build in order to test them properly.

Then to test these builds, a test driver has to be written and put in place.
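
A minimal sketch of such a test driver in Python (the apply_discount module and its test data are invented for
the example): the low-level module is real, and the driver stands in for the calling module that has not been
written yet:

    def apply_discount(price, percent):
        """Low-level module under test."""
        return round(price * (1 - percent / 100.0), 2)

    def driver():
        """Test driver: supplies the inputs the future calling module would."""
        cases = [(100.0, 10, 90.0), (50.0, 0, 50.0), (80.0, 25, 60.0)]
        for price, percent, expected in cases:
            result = apply_discount(price, percent)
            print(price, percent, result, "OK" if result == expected else "FAIL")

    driver()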

The advantage of Bottom-up integration is that there is no need for program stubs as we start developing
and testing with the actual modules.

Starting at the bottom of the hierarchy also means that the critical modules are usually built first and
therefore any errors in these modules are discovered early in the process.

As with Top-down integration, there are some drawbacks to this procedure.

In order to test the modules, we have to build test drivers, which are more complex than stubs. In addition,
the drivers themselves have to be tested, so more effort is required.

A major disadvantage to Bottom-up integration is that no working model can be presented or tested until
many modules have been built (Humphrey, 1989).

This also means that any errors in any of the interfaces are discovered very late in the process.

3.2.3. Big-Bang Strategy

The Big-Bang approach is very simple in its philosophy: all the modules or builds are constructed and tested
independently of each other and, when they are finished, they are all put together at the same time.

The main advantage of this approach is that it is very quick as no drivers or stubs are needed, thus cutting
down on the development time.

However, as with anything that is quickly put together, this process usually yields more errors than the
other two. Since these errors take more time to fix than errors caught at the module level, this
method is usually considered the least effective.

Because of the amount of coordination required, it is also very demanding on resources.

Another drawback is that there is really nothing to demonstrate until all the modules have been built and
integrated.

3.3. Function Testing


Function testing is a testing process that is black-box in nature. It is aimed at examining the overall
functionality of the product. It usually includes testing of all the interfaces and should therefore involve the
clients in the process.

Because every aspect of the software system is being tested, the specifications for this test should be very
detailed, describing who will conduct the tests, where, when and how they will be conducted, and what exactly
will be tested.

The portion of the testing that involves the clients is usually conducted as an alpha test, where the
developers closely monitor how the clients use the system and take notes on what needs to be improved.

3.4. System Testing


The final stage of the testing process should be system testing. This type of test involves examination of the
whole computer system: all the software components, all the hardware components and any interfaces.

The whole computer-based system is checked not only for validity but also to confirm that its objectives are being met.

It should include recovery testing, security testing, stress testing and performance testing.

Recovery testing uses test cases designed to examine how easily and completely the system can recover
from a disaster (power shutdown, blown circuit, disk crash, interface failure, insufficient memory, etc.). It
is desirable to have a system capable of recovering quickly and with minimal human intervention. It should
also have a log of activities happening before the crash (these should be part of daily operations) and a log
of messages during the failure (if possible) and upon restart.

Security testing involves testing the system in order to make sure that unauthorized personnel or other
systems cannot gain access to the system and information or resources within it. Programs that check for
access to the system via passwords are tested along with any organizational security procedures established.

Stress testing involves placing unusual loads on the system in an attempt to break it. The system is
monitored for performance loss and susceptibility to crashing during the load periods. If it does crash as a
result of a high load, that simply provides one more recovery test.

Performance testing involves monitoring and recording performance levels during regular, low-stress and
high-stress loads. It measures resource usage under these conditions and serves as a basis for forecasting
additional resources needed (if any) in the future. It is important to note that performance objectives should
have been developed during the planning stage; performance testing is meant to assure that these objectives
are being met. However, these tests may also be run in the initial stages of production to compare actual
usage to the forecast figures.


4. SOFTWARE TESTING METRICS
In general, testers must rely on metrics collected during the analysis, design and coding stages of development
in order to design, develop and conduct the necessary tests. These generally serve as indicators of the overall
testing effort needed. High-level design metrics can also help predict the complexities associated with
integration testing and the need for specialized testing software (e.g. stubs and drivers). Cyclomatic
complexity may identify modules that will require extensive testing, as those with high cyclomatic complexity
are more likely to be error prone (Pressman, 1997).

Metrics collected from testing, on the other hand, usually comprise the number and type of errors,
failures, bugs and defects found. These can then serve as measures for calculating the further testing effort
required. They can also be used as a management tool to determine the extent of the project's success or
failure and the correctness of the design. In any case, these metrics should be collected, examined and stored for
future needs.
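
As a small illustration, the sketch below (Python; the error log entries are invented) tallies errors found
during testing by type and by module, the kind of raw count that can feed estimates of the remaining test effort:

    from collections import Counter

    # (error type, module) pairs recorded during testing -- illustrative data.
    error_log = [
        ("interface", "moduleA"), ("logic", "moduleB"),
        ("logic", "moduleA"), ("performance", "moduleC"),
    ]

    by_type = Counter(error_type for error_type, _ in error_log)
    by_module = Counter(module for _, module in error_log)

    print("Errors by type:  ", dict(by_type))
    print("Errors by module:", dict(by_module))   # modules with many errors may need more testing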

5. OBJECT ORIENTED TESTING METHODS


While the jury is still out on whether "traditional" testing methods and techniques are applicable to OO
models, there seems to be a consensus that because the OO paradigm is different from the traditional one,
some alteration or expansion of the traditional testing methods is needed. The OO methods may utilize
many or just some aspects of the traditional ones, but they need to be broadened to sufficiently test OO
products.

Because of inheritance and inter-object communications in the OO environment, much more emphasis is
placed on the analysis and design and on their "correctness" and consistency. This is imperative to prevent
analysis errors from trickling down to design and development, which would increase the effort needed to
correct the problem.

5.1. OO Test Case Design


Conventional test case designs are based on the process they are to test and its inputs and outputs. OO test
cases need to concentrate on the states of a class. To examine the different states, the cases have to follow
the appropriate sequences of operations in the class. The class, as an encapsulation of attributes and operations
that can be inherited, thus becomes the main target of OO testing.

Operations of a class can be tested using the conventional white-box methods and techniques (basis path,
loop, data flow), but there is a growing notion that these should be applied at the class level instead.

5.2. Fault Based Testing


This type of testing allows for designing test cases based on the client specification or the code or both. It
tries to identify plausible faults (areas of design or code that may lead to errors). For each of these faults a
test case is developed to "flush" the errors out. These tests also force each line of code to be executed
(Marick, 1995a).

This testing method does not find all types of errors, however. Incorrect specifications and interface errors
can be missed. You may remember that these types of errors can be uncovered by function testing in the
traditional model. In the OO model, interaction errors can be uncovered by scenario-based testing. This form
of OO testing can only test against the client's specifications, so interface errors are still missed.

5.3. Class Level Methods
As mentioned above, a class (and its operations) is the unit most concentrated on in OO environments.
From here, testing expands to other classes and sets of classes, just as traditional models are tested by
starting at the module level and continuing to module clusters or builds and then the whole program.

5.3.1. Random Testing

This is one of the methods used to exercise a class. It is based on developing random test sequences that
exercise the minimum number of operations typical of the behavior of the class.
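
A minimal sketch of random testing at the class level (Python; the Stack class and its invariant check are
assumptions made for the example): a random sequence of operations is generated and the object's state is
checked against a simple model after every step:

    import random

    class Stack:
        def __init__(self):
            self.items = []
        def push(self, x):
            self.items.append(x)
        def pop(self):
            return self.items.pop() if self.items else None
        def size(self):
            return len(self.items)

    random.seed(1)              # reproducible test run
    s, expected = Stack(), 0
    for _ in range(10):
        if random.choice(["push", "pop"]) == "push":
            s.push(random.randint(0, 9))
            expected += 1
        else:
            s.pop()
            expected = max(expected - 1, 0)
        assert s.size() == expected   # state must match the model after every operation
    print("random sequence completed, final size:", s.size())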

5.3.2. Partition Testing

This method categorizes the inputs and outputs of a class in order to test them separately. It reduces
the number of test cases that have to be designed.

To determine the different categories to test, partitioning can be broken down as follows (a brief sketch of the first approach follows the list):

• State-based partitioning - categorizes class operations based on how they change the state of a
class
• Attribute-based partitioning - categorizes class operations based on attributes they use
• Category-based partitioning - categorizes class operations based on the generic function the
operations perform
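
As a brief sketch of the first of these, state-based partitioning (Python; the Account class is invented for
illustration), operations that change an object's state are tested in one partition and operations that do not
in another:

    class Account:
        def __init__(self):
            self.balance = 0
        def deposit(self, amount):      # state-changing operation
            self.balance += amount
        def withdraw(self, amount):     # state-changing operation
            self.balance -= amount
        def get_balance(self):          # non-state-changing operation
            return self.balance

    # Partition 1: state-changing operations, checked via the resulting state.
    a = Account()
    a.deposit(100)
    a.withdraw(40)
    assert a.get_balance() == 60

    # Partition 2: non-state-changing operations, checked for absence of side effects.
    before = a.get_balance()
    a.get_balance()
    assert a.get_balance() == before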

5.4. Scenario-based Testing


This form of testing concentrates on what the user does. It basically involves capturing the user actions and
then simulating them, and similar actions, during the test. These tests tend to find interaction errors
(Marick, 1995b).
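
A minimal sketch of a scenario-based test in Python (the Editor class and the captured actions are invented):
a recorded sequence of user actions is replayed against the object and the resulting state is checked, which is
where interaction errors tend to surface:

    class Editor:
        def __init__(self):
            self.text = ""
        def type_text(self, s):
            self.text += s
        def delete_last(self, n):
            self.text = self.text[:-n] if n else self.text

    # Captured scenario: the user types, deletes a character, then types again.
    scenario = [("type_text", "hello "), ("delete_last", 1), ("type_text", "world")]

    editor = Editor()
    for action, arg in scenario:
        getattr(editor, action)(arg)

    assert editor.text == "helloworld"   # checks the interaction of typing and deleting
    print(editor.text)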


6. OBJECT ORIENTED TESTING STRATEGIES


Testing strategy is one area of software testing where the traditional (procedural) and OO models follow
the same path. In both cases testing starts with unit testing and then continues with integration testing,
function testing and finally system testing. The meaning of the individual strategies has had to be adjusted,
however.

6.1. OO Unit Testing


In the OO paradigm it is no longer possible to test individual operations as units. Instead, they are tested as part
of the class, and the class or an instance of a class (an object) then represents the smallest testable unit or
module. Because of inheritance, testing individual operations separately (independently of the class) would
not be very effective, as they interact with each other by modifying the state of the object they are applied
to (Binder, 1994).

6.2. OO Integration Testing

This strategy involves testing the classes as they are integrated into the system. The traditional approach
would test each operation separately as it is implemented into a class. In an OO system this approach is
not viable because of the "direct and indirect interactions of the components that make up the class"
(Pressman, 1997).

Integration testing in OO can be performed in two basic ways (Binder, 1994):

• Thread-based - Takes all the classes needed to react to a given input. Each class is unit tested and
then the thread constructed from these classes is tested as a set.
• Uses-based - Tests classes in groups. Once the group is tested, the next group that uses the first
group (dependent classes) is tested. Then the group that uses the second group and so on. Use of
stubs or drivers may be necessary.

Cluster testing is similar to testing builds in the traditional model. Basically collaborating classes are tested
in clusters.
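
To illustrate, here is a minimal thread-based/cluster sketch in Python (the Order and LineItem classes are
invented): the classes that collaborate to handle one input are unit tested individually and then exercised
together as a set:

    class LineItem:
        def __init__(self, price, qty):
            self.price, self.qty = price, qty
        def total(self):
            return self.price * self.qty

    class Order:
        def __init__(self):
            self.items = []
        def add(self, item):
            self.items.append(item)
        def total(self):
            return sum(i.total() for i in self.items)

    # Thread for the input "add two items and request the order total".
    order = Order()
    order.add(LineItem(5, 2))
    order.add(LineItem(3, 1))
    assert order.total() == 13   # the collaborating classes are tested as a set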

6.3. OO Function Testing and OO System Testing


Function testing of OO software is no different from validation testing of procedural software. Client
involvement is usually part of this testing stage. In an OO environment, use cases may be employed; these
are basically descriptions of how the system is to be used.

OO system testing is really identical to its counterpart in the procedural environment.


7. OBJECT ORIENTED TESTING METRICS


Testing metrics can be grouped into two categories: encapsulation and inheritance (Pressman, 1997).

Encapsulation

Lack of cohesion in methods (LCOM) - The higher the value of LCOM, the more states
have to be tested.

Percent public and protected (PAP) - This number indicates the percentage of class
attributes that are public or protected, and thus the likelihood of side effects among classes.

Public access to data members (PAD) - This metric shows the number of classes that
access another class's attributes, and thus violations of encapsulation.

Inheritance

Number of root classes (NOR) - A count of distinct class hierarchies.

Fan in (FIN) - FIN > 1 is an indication of multiple inheritance and should be avoided.

Number of children (NOC) and depth of the inheritance tree (DIT) - For each
subclass, its superclass has to be re-tested.

The above metrics (and others) are different from those used in traditional software testing; however, the
metrics collected from the testing process itself should be the same (i.e. number and type of errors, performance
metrics, etc.).
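
As a small illustration of two of the inheritance metrics, the sketch below (Python; the class hierarchy is
invented) computes the number of children (NOC) and the depth of the inheritance tree (DIT) for each class:

    class Shape: pass
    class Polygon(Shape): pass
    class Circle(Shape): pass
    class Triangle(Polygon): pass

    classes = [Shape, Polygon, Circle, Triangle]

    def noc(cls):
        """Number of immediate subclasses of cls within the hierarchy."""
        return sum(1 for c in classes if cls in c.__bases__)

    def dit(cls):
        """Depth of cls in the inheritance tree (the root class has depth 0)."""
        return 0 if cls is Shape else 1 + dit(cls.__bases__[0])

    for c in classes:
        print(c.__name__, "NOC =", noc(c), "DIT =", dit(c))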


8. ISSUES
Invariably, there will be issues with software testing under both models. This is simply because both
environments are dynamic and have to deal with ongoing changes during the life cycle of the project: changes
in specifications, analysis, design and development. All of these, of course, affect testing.
Here, however, we concentrate on possible problem areas within the testing strategies and methods themselves
and examine how these issues pertain to each environment.

8.1. Procedural Software Testing Issues


Software testing in the traditional sense can miss a large number of errors if used alone. That is why
processes like Software Inspections and Software Quality Assurance (SQA) have been developed.
However, even testing by itself is very time consuming and very costly. It also ties up resources that
could be used elsewhere. When combined with inspections and/or SQA, or when formalized, it also
becomes a project of its own, requiring analysis, design, implementation and a supporting
communications infrastructure. With it, interpersonal problems arise and need to be managed.

On the other hand, when testing is conducted by the developers, it will most likely be very subjective.
Another problem is that developers are trained to avoid errors. As a result they may conduct tests that prove
the product is working as intended (i.e. proving there are no errors) instead of creating test cases that tend
to uncover as many errors as possible.

8.2. OO Software Testing Issues

A common way of testing OO software is testing-by-poking-around (Binder, 1995). In this case the
developer's goal is to show that the product can do something useful without crashing. Attempts are made
to "break" the product; if and when it breaks, the errors are fixed and the product is then deemed "tested".

The testing-by-poking-around method of testing OO software is, in my opinion, as unsuccessful as random
testing of procedural code or design. It leaves the finding of errors up to chance.

Another common problem in OO testing is the idea that since a superclass has been tested, any subclasses
inheriting from it don't need to be.

This is not true, because by defining a subclass we define a new context for the inherited attributes. Because
of the interaction between objects, we have to design test cases to test each new context, and re-test the
superclass as well, to ensure the proper working order of those objects (Binder, 1995).

Yet another misconception in OO is that if you do proper analysis and design (using the class interface or
specification), you don't need to test, or that black-box testing alone is sufficient.

However, function tests only exercise the "normal" paths or states of the class. In order to test the other paths or
states, we need code instrumentation. Also, it is often difficult to exercise exception and error handling
without examining the source code (Binder, 1995).


9. CONCLUSION
In conclusion, it is my intent to state the obvious: proper testing methods must be used to conduct
"true" testing. What constitutes a proper method is driven by the environment, the situation and, most
importantly, by the objectives. As a general rule, no one method alone is sufficient. A combination of
methods and techniques is usually necessary to develop a good set of test cases against which the software
can be evaluated. This holds true under both the traditional (procedural) and object-oriented models.


10. REFERENCES

Pressman, R.S. (1997). Software Engineering: A Practitioner's Approach. U.S.A.: McGraw-Hill.

Humphrey, W.S. (1989). Managing the Software Process. U.S.A.: Addison-Wesley.

Marick, B. (1995a). Testing Foundations, part 1. http://www.stlabs.com/MARICK/1-fault.htm.

Marick, B. (1995b). Testing Foundations, part 2. http://www.stlabs.com/MARICK/2-scen.htm.

Binder, R.V. (1994). Testing Object-Oriented Systems: A Status Report. http://www.rbsc.com/pages/ootstat.html.

Binder, R.V. (1995). Object-Oriented Testing: Myth and Reality. http://www.rbsc.com/pages/myths.html.
