
TESTING

Testing is a process used to help identify the correctness, completeness and quality of
developed computer software. With that in mind, testing can never completely establish the
correctness of computer software.

There are many approaches to software testing, but effective testing of complex products is
essentially a process of investigation, not merely a matter of creating and following rote
procedure. One definition of testing is "the process of questioning a product in order to
evaluate it", where the "questions" are things the tester tries to do with the product, and
the product answers with its behavior in reaction to the probing of the tester. Although most
of the intellectual processes of testing are nearly identical to those of review or inspection,
the word testing usually connotes the dynamic analysis of the product: putting the
product through its paces.

The quality of the application can and normally does vary widely from system to system but
some of the common quality attributes include reliability, stability, portability, maintainability
and usability. Refer to the ISO standard ISO 9126 for a more complete list of attributes and
criteria.

Testing helps in verifying and validating that the software is working as it is intended to
work. This involves using Static and Dynamic methodologies to test the application.
Because of the fallibility of its human designers and its own abstract, complex nature,
software development must be accompanied by quality assurance activities. It is not
unusual for developers to spend 40% of the total project time on testing. For life-critical
software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as
all other activities combined. The destructive nature of testing requires that the developer
discard preconceived notions of the correctness of his/her developed software.

Software Testing Fundamentals


Testing objectives include

1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of
time and with a minimum amount of effort. A secondary benefit of testing is that it
demonstrates that the software appears to be working as stated in the specifications. The
data collected through testing can also provide an indication of the software's reliability and
quality. But testing cannot show the absence of defects; it can only show that software
defects are present.

When Testing should start:

Testing early in the life cycle reduces errors. Test deliverables are associated with every
phase of development. The goal of a software tester is to find bugs, find them as early as
possible, and make sure they are fixed.

The number one cause of Software bugs is the Specification. There are several reasons
specifications are the largest bug producer.

In many instances a Spec simply isn’t written. Other reasons may be that the spec isn’t
thorough enough, it’s constantly changing, or it’s not communicated well to the entire team.
Planning software is vitally important; if it is not done correctly, bugs will be created.

The next largest source of bugs is the design; that is where the programmers lay the plan
for their software. Compare it to an architect creating the blueprint for a building. Bugs
occur here for the same reasons they occur in the specification: the design is rushed,
changed, or not well communicated.

Coding errors may be more familiar to you if you are a programmer. Typically these can be
traced to software complexity, poor documentation, schedule pressure, or just plain
dumb mistakes. It's important to note that many bugs that appear on the surface to be
programming errors can really be traced to the specification. It's quite common to hear a
programmer say, "Oh, so that's what it's supposed to do. If someone had told me that, I
wouldn't have written the code that way."

The other category is the catch-all for what is left. Some bugs are blamed on false
positives, conditions that were thought to be bugs but really weren't. There may be
duplicate bugs, multiple reports that resulted from the same root cause. Some bugs can also
be traced to testing errors.
Costs: The costs are logarithmic; that is, they increase tenfold as time increases. A bug
found and fixed during the early stages, when the specification is being written, might cost
next to nothing, or 10 cents in our example. The same bug, if not found until the software is
coded and tested, might cost $1 to $10. If a customer finds it, the cost could easily top
$100.
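
To make the growth concrete, here is a minimal sketch of that cost curve in Python; the dollar figures are the illustrative example values above, not measured data:

# Illustrative cost-of-fixing model: the cost of a bug grows roughly
# tenfold with each phase it survives (example figures from the text).
PHASES = ["specification", "design", "coding", "release"]

def fix_cost(phase, base=0.10):
    """Approximate cost ($) to fix a bug first found in the given phase."""
    return base * 10 ** PHASES.index(phase)

for phase in PHASES:
    print(f"{phase:>13}: ${fix_cost(phase):,.2f}")
# specification: $0.10, design: $1.00, coding: $10.00, release: $100.00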

When to Stop Testing

This can be difficult to determine. Many modern software applications are so complex, and
run in such an interdependent environment, that complete testing can never be done.
"When to stop testing" is one of the most difficult questions to a test engineer. Common
factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines)
• Test cases completed with a certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• The rate at which new bugs are found becomes too low
• Beta or Alpha testing period ends
• The risk in the project is under an acceptable limit

Practically, the decision to stop testing is based on the level of risk acceptable to
management. As testing is a never-ending process, we can never assume that 100% testing
has been done; we can only minimize the risk of shipping the product to the client with X
amount of testing done. The risk can be measured by formal risk analysis, but for a small
duration / low budget / low resources project, it can be estimated simply by:
• Measuring Test Coverage.
• Number of test cycles.
• Number of high priority bugs.

Test Strategy:

How we plan to cover the product so as to develop an adequate assessment of quality.

A good test strategy is:


1. Specific
2. Practical
3. Justified

The purpose of a test strategy is to clarify the major tasks and challenges of the test
project. Test Approach and Test Architecture are other terms commonly used to describe
what I’m calling test strategy. Example of a poorly stated (and probably poorly conceived)
test strategy:

"We will use black box testing, cause-effect graphing, boundary testing, and white box
testing to test this product against its specification."

Test Strategy:

A test strategy covers the type of project, the type of software, when testing will occur,
critical success factors, and tradeoffs.

Test Plan - Why

• Identify Risks and Assumptions up front to reduce surprises later.


• Communicate objectives to all team members.
• Foundation for Test Spec, Test Cases, and ultimately the Bugs we find.
Failing to plan = planning to fail.

Test Plan - What

• Derived from Test Approach, Requirements, Project Plan, Functional Spec., and
Design Spec.
• Details the project-specific Test Approach.
• Lists general (high-level) Test Case areas.
• Includes a testing Risk Assessment.
• Includes a preliminary Test Schedule.
• Lists Resource requirements.

Test Plan

The test strategy identifies the multiple test levels that are going to be performed for the
project. Activities at each level must be planned well in advance and formally documented,
and the individual test levels are carried out based on those individual plans.
Entry means the entry criteria for that phase; for example, for unit testing, coding must be
complete before unit testing can start. Task is the activity that is performed.
Validation is the way in which progress, correctness, and compliance are verified for
that phase. Exit gives the completion criteria of the phase, after the validation is done; for
example, the exit criterion for unit testing is that all unit test cases must pass.
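
As an illustration, the ETVX entries for the unit test level described above could be recorded as simply as this (a sketch; the criteria strings are the examples from the text, not a prescribed standard):

# ETVX (Entry, Task, Validation, eXit) criteria for the unit test level,
# using the examples given in the text above.
unit_test_etvx = {
    "entry":      "coding of the unit is complete",
    "task":       "design and execute the unit test cases",
    "validation": "verify progress, correctness and compliance",
    "exit":       "all unit test cases pass",
}

for step, criterion in unit_test_etvx.items():
    print(f"{step:>10}: {criterion}")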

Unit Test Plan {UTP}

The unit test plan is the overall plan to carry out the unit test activities. The lead tester
prepares it and distributes it to the individual testers. It contains the following
sections.

What is to be tested?

The unit test plan must clearly specify the scope of unit testing. Normally the basic
input/output of the units, along with their basic functionality, is tested. In this case
mostly the input units are tested for format, alignment, accuracy, and totals. The
UTP clearly gives the rules for what data types are present in the system, their format, and
their boundary conditions. This list may not be exhaustive, but it is better to have a
complete list of these details.

Sequence of Testing

The sequence of test activities to be carried out in this phase is listed in
this section. This includes whether to execute positive test cases first or negative test cases
first, whether to execute test cases based on priority, whether to execute test cases based
on test groups, etc. Positive test cases prove that the system does what it is supposed to
do; negative test cases prove that the system does not do what it is not supposed to do.
Testing of screens, files, databases, etc. is to be given in proper sequence.

Basic Functionality of Units

This section describes how the independent functionality of each unit is tested, excluding
any communication between the unit and other units; the interface part is out of scope at
this test level. Apart from the above sections, the following sections are addressed, very
specific to unit testing.
• Unit Testing Tools
• Priority of Program units
• Naming convention for test cases
• Status reporting mechanism
• Regression test approach
• ETVX criteria

Integration Test Plan

The integration test plan is the overall plan for carrying out the activities at the integration
test level. It contains the following sections.

What is to be tested?

This section clearly specifies the kinds of interfaces that fall under the scope of testing:
internal and external interfaces, with their requests and responses. It need not go deep
into technical details, but the general approach to how the interfaces are triggered is
explained.

Sequence of Integration

When there are multiple modules present in an application, the sequence in which they are
to be integrated is specified in this section. The dependencies between the
modules play a vital role here. If unit B has to be executed, it may need the data that is fed
by unit A and unit X; in this case, units A and X have to be integrated first, and then, using
that data, unit B has to be tested. This has to be stated for the whole set of units in the
program. Given this correctly, the testing activities slowly build the product, unit by unit,
integrating as they go.

System Test Plan {STP}

The system test plan is the overall plan for carrying out the system test level activities. In
the system test, apart from testing the functional aspects of the system, some special
testing activities are carried out, such as stress testing. The following sections are
normally present in a system test plan.
What is to be tested?

This section defines the scope of system testing, very specific to the project. Normally,
system testing is based on the requirements, and all requirements are to be verified in the
scope of system testing; this covers the functionality of the product. Apart from this, any
special testing to be performed is also stated here.

Functional Groups and the Sequence

The requirements can be grouped by functionality, and there may also be priorities among
the functional groups. For example, in a banking application, anything related to customer
accounts can be grouped into one area, anything related to inter-branch transactions into
another, and so on. In the same way, for the product under test, these areas are to be
listed here, and the suggested sequence for testing these areas, based on their priorities,
is to be described.

Acceptance Test Plan {ATP}

The client performs acceptance testing at their own site. It will be very similar to the
system test performed by the software development unit. Since the client is the one who
decides the format and testing methods as part of acceptance testing, there is no way to
know in advance exactly how they will carry out the testing. But it will not differ much from
the system testing; assume that all the rules which are applicable to the system test can be
applied to acceptance testing also.

Since this is just one level of testing done by the client for the overall product, it may
include test cases covering the unit and integration test level details.
A sample Test Plan Outline, along with descriptions, is shown below:

Test Plan Outline

1. BACKGROUND – This item summarizes the functions of the application system and the
tests to be performed.
2. INTRODUCTION
3. ASSUMPTIONS – Indicates any anticipated assumptions which will be made while
testing the application.
4. TEST ITEMS - List each of the items (programs) to be tested.
5. FEATURES TO BE TESTED - List each of the features (functions or requirements) which
will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED - Explicitly lists each feature, function, or requirement
which won't be tested and why not.
7. APPROACH - Describe the data flows and test philosophy.
Simulation or Live execution, Etc. This section also mentions all the approaches which
will be followed at the various stages of the test execution.
8. ITEM PASS/FAIL CRITERIA - Blanket statement or itemized list of expected outputs
and tolerances.
9. SUSPENSION/RESUMPTION CRITERIA - Must the test run from start to completion?
Under what circumstances may it be resumed in the middle?
Establish checkpoints in long tests.
10. TEST DELIVERABLES - What, besides software, will be delivered?
Test report, test software.
11. TESTING TASKS - Functional tasks (e.g., equipment setup) and administrative tasks.
12. ENVIRONMENTAL NEEDS
Security clearance, office space and equipment, hardware/software requirements.
13. RESPONSIBILITIES
Who does the tasks in Section 11 (Testing Tasks)?
What does the user do?
14. STAFFING & TRAINING
15. SCHEDULE
16. RESOURCES
17. RISKS & CONTINGENCIES
18. APPROVALS

Risk Analysis:

A risk is the potential for loss or damage to an organization from materialized threats. Risk
analysis attempts to identify all the risks and then quantify their severity. A threat, as we
have seen, is a possible damaging event; if it occurs, it exploits a vulnerability in the
security of a computer-based system.

Risk Identification:

1. Software Risks: Knowledge of the most common risks associated with Software
development, and the platform you are working on.
2. Business Risks: Most common risks associated with the business using the Software
3. Testing Risks: Knowledge of the most common risks associated with Software Testing
for the platform you are working on, tools being used, and test methods being applied.
4. Premature Release Risk: Ability to determine the risk associated with releasing
unsatisfactory or untested Software Products.
5. Risk Methods: Strategies and approaches for identifying risks or problems associated
with implementing and operating information technology, products, and processes;
assessing their likelihood; and initiating strategies to address those risks.

Traceability means being able to trace back and forth how and where any work product
fulfills the directions of the preceding (source) product. The matrix deals with the where;
the how you have to work out yourself, once you know the where.

Take, for example, the requirement of user-friendliness (UF). Since UF is a complex concept,
it is not solved by just one design solution, nor by one line of code. Many partial
design solutions may contribute to this requirement, and many groups of lines of code may
contribute to it.

A Requirements-Design Traceability Matrix puts on one side (e.g. the left) the sub-
requirements that together are supposed to satisfy the UF requirement, along with the other
(sub-)requirements. On the other side (e.g. the top) you list all design solutions. At the
intersections of the matrix you can then mark which design solutions address (more, or
less) each requirement. If a design solution does not address any requirement, it should be
deleted, as it is of no value.

Having this matrix, you can check that every requirement has at least one design solution,
and by checking the solution(s) you can see whether the requirement is sufficiently covered
by the connected design(s).

If you have to change any requirement, you can see which designs are affected. And if you
change any design, you can check which requirements may be affected and see what the
impact is.
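
A minimal sketch of such a matrix as a data structure, with the gap check and impact check described above (all requirement and design names are hypothetical):

# Requirements-Design Traceability Matrix as a simple mapping.
# Requirement and design-solution names are made-up examples.
matrix = {
    "UF-1 consistent menus": ["menu framework", "style-guide checker"],
    "UF-2 undo any action":  ["command history"],
    "UF-3 response < 1 s":   [],   # no design solution yet: a gap
}

# Check that every requirement has at least one design solution.
for req, designs in matrix.items():
    if not designs:
        print(f"GAP: requirement '{req}' has no design solution")

# Impact analysis: which requirements does changing a design affect?
changed = "command history"
affected = [req for req, designs in matrix.items() if changed in designs]
print(f"Changing '{changed}' affects: {affected}")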

In a Design-Code Traceability Matrix you can do the same to keep track of which code
implements a particular design and how changes in design or code affect each other.
A traceability matrix:

1. Demonstrates that the implemented system meets the user requirements.
2. Serves as a single source for tracking purposes.
3. Identifies gaps in the design and testing.
4. Prevents delays in the project timeline, which can be brought about by having to
backtrack to fill the gaps.

Software Testing Life Cycle:

The test development life cycle contains the following components:

1. Requirements
2. Use Case Document
3. Test Plan
4. Test Case
5. Test Case execution
6. Report Analysis
7. Bug Analysis
8. Bug Reporting

A use case is a typical interaction scenario from a user's perspective, used for system
requirements studies or testing; in other words, "an actual or realistic example scenario".
A use case describes the use of a system from start to finish. Use cases focus attention on
aspects of a system useful to people outside of the system itself.
• Users of a program are called users or clients.
• Users of an enterprise are called customers, suppliers, etc.

Use Case:

A collection of possible scenarios between the system under discussion and external actors,
characterized by the goal the primary actor has toward the system's declared
responsibilities, showing how the primary actor's goal might be delivered or might fail.

Use cases are goals (use cases and goals are used interchangeably) that are made up of
scenarios. Scenarios consist of a sequence of steps to achieve the goal; each step in a
scenario is a sub (or mini) goal of the use case. As such, each sub-goal represents either
another use case (a subordinate use case) or an autonomous action at the lowest level
desired by our use case decomposition.

This hierarchical relationship is needed to properly model the requirements of a system
being developed. A complete use case analysis requires several levels. In addition to the
level at which a use case operates, it is important to understand the scope it addresses.
The level and scope are important to ensure that the language and granularity of scenario
steps remain consistent within the use case.

There are two scopes that use cases are written from: Strategic and System.

There are also three levels: Summary, User and Sub-function.

Scopes: Strategic and System

Strategic Scope:

The goal (Use Case) is a strategic goal with respect to the system. These goals are goals of
value to the organization. The use case shows how the system is used to benefit the
organization. These strategic use cases will eventually use some of the same lower-level
(subordinate) use cases.
System Scope:

Use cases at system scope are bounded by the system under development. The goals
represent specific functionality required of the system. The majority of use cases are at
system scope. These use cases are often steps in strategic-level use cases.

Levels: Summary Goal, User Goal and Sub-function.

Sub-function Level Use Case:

A sub goal or step is below the main level of interest to the user. Examples are "logging in"
and "locate a device in a DB". Always at System Scope.

User Level Use Case:

This is the level of greatest interest. It represents a user task or elementary business
process. A user-level goal addresses the question "Does your job performance depend on
how many of these you do in a day?" For example, "Create Site View" or "Create New
Device" would be user-level goals, but "Log In to System" would not. Always at System
Scope.

Summary Level Use Case:

Written for either strategic or system scope, these represent collections of User Level Goals.
For example, the summary goal "Configure Data Base" might include, as a step, the user-
level goal "Add Device to database". Either at System or Strategic Scope.

Test Documentation

Test documentation is a required tool for managing and maintaining the testing process.
Documents produced by testers should answer the following questions:
• What to test? Test Plan
• How to test? Test Specification
• What are the results? Test Results Analysis Report

1. Why test - what is Testing?


Testing is a process used to help identify the correctness, completeness and quality
of developed computer software.

2. System Testing myths and legends - What are they?

Myth 1: There is no need to test
Myth 2: If testing must be done, two weeks at the end of the project is sufficient for
testing
Myth 3: Re-testing is not necessary
Myth 4: Any fool can test
Myth 5: The last thing you want is users involved in test
Myth 6: The V-model is too complicated

3. What are the Concepts for Application Test Management?

• Testing should be pro-active, following the V-model
• Test execution can be a manual process
• Test execution can be an automated process
• It is possible to plan the start date for testing
• It is not possible to accurately plan the end date of testing
• Ending testing is through risk assessment
• A fool with a tool is still a fool
• Testing is not a diagnosis process; testing is a triage process
• Testing is expensive; not testing can be more expensive

4. Test Analysts - What is their Value Add?

• Understand the system under test


• Document Assumptions
• Create and execute repeatable tests
• Value add through negative testing
• Contribute to Impact Analysis when assessing Changes
• Contribute to the risk assessment when considering to end testing

5. What is involved in the Application Test Lifecycle?


• Unit testing
• Module testing
• Component testing
• Component integration testing
• Subsystem testing
• System testing
• Functional testing
• Technical integration testing
• System integration testing
• Non-functional testing
• Integration testing
• Regression testing
• Model Office testing
• User Acceptance testing

6. How to manage Risk Mitigation?

• Identify risks before the adversity affects the project
• Analyse risk data for interpretation by the project team
• Plan actions for probability, magnitude and consequences
• Track risks and actions, maintaining a risk register
• Control the risk action plan, correcting plan deviations

7. What should the Test Team do?

• Programme Management
• Strong Change Management
• Strict Configuration Control
• Pro-active Scope Creep Management
• Inclusion in the decision making process

8. What are the Test Team Deliverables?

• Test Plans
• Test Script Planner
• Test Scripts
• Test Execution Results
• Defect Reports

STATIC TESTING

The verification activities fall into the category of static testing. During static testing, you
have a checklist to check whether the work you are doing is going as per the set standards
of the organization. These standards can be for coding, integrating, and deployment.
Reviews, inspections, and walkthroughs are static testing methodologies.

DYNAMIC TESTING

Dynamic testing involves working with the software, giving input values and checking
whether the output is as expected. These are the validation activities. Unit tests, integration
tests, system tests, and acceptance tests are a few of the dynamic testing methodologies.

BLACK BOX TESTING

Introduction

Black box testing attempts to derive sets of inputs that will fully exercise all the functional
requirements of a system. It is not an alternative to white box testing. This type of testing
attempts to find errors in the following categories:

1. Incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.

Tests are designed to answer the following questions:

1. How is the function's validity tested?


2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?

White box testing should be performed early in the testing process, while black box testing
tends to be applied during later stages. Test cases should be derived which

1. reduce the number of additional test cases that must be designed to achieve reasonable
testing, and
2. tell us something about the presence or absence of classes of errors, rather than an error
associated only with the specific test at hand.

Equivalence Partitioning

This method divides the input domain of a program into classes of data from which test
cases can be derived. Equivalence partitioning strives to define a test case that uncovers
classes of errors and thereby reduces the number of test cases needed. It is based on an
evaluation of equivalence classes for an input condition. An equivalence class represents a
set of valid or invalid states for input conditions.

Equivalence classes may be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are
defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid
equivalence class are defined.
4. If an input condition is boolean, then one valid and one invalid equivalence class are
defined.
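
For example, for a hypothetical input field that accepts an age in the range 18 to 65, guideline 1 yields one valid and two invalid classes; a sketch:

# Equivalence classes for a hypothetical "age" input with valid range 18..65
# (guideline 1: a range gives one valid and two invalid classes).
equivalence_classes = {
    "valid (18..65)": [30],   # one representative value per class is enough
    "invalid (< 18)": [10],
    "invalid (> 65)": [80],
}

def accepts_age(age):
    """Hypothetical system under test."""
    return 18 <= age <= 65

for name, values in equivalence_classes.items():
    for v in values:
        print(f"{name}: age={v} -> accepted={accepts_age(v)}")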

Boundary Value Analysis

This method leads to a selection of test cases that exercise boundary values. It
complements equivalence partitioning since it selects test cases at the edges of a class.
Rather than focusing on input conditions solely, BVA derives test cases from the output
domain also. BVA guidelines include:

1. For input ranges bounded by a and b, test cases should include values a and b
and just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed
to exercise the minimum and maximum numbers and values just above and
below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be
designed to exercise the data structure at its boundary.
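
Continuing the hypothetical age field (valid range 18 to 65), guideline 1 gives six boundary test values; a sketch:

# Boundary value analysis for a hypothetical input range a=18, b=65:
# test a and b themselves plus values just below and just above each.
a, b = 18, 65
boundary_values = [a - 1, a, a + 1, b - 1, b, b + 1]

def accepts_age(age):
    """Hypothetical system under test."""
    return a <= age <= b

for v in boundary_values:
    expected = a <= v <= b              # oracle: is v inside the valid range?
    actual = accepts_age(v)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"age={v}: expected={expected}, actual={actual} -> {verdict}")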

Cause-Effect Graphing Techniques

Cause-effect graphing is a technique that provides a concise representation of logical
conditions and corresponding actions. There are four steps:

1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is
assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
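
A toy illustration of steps 3 and 4, using two hypothetical causes and one effect; each row of the decision table becomes a test case:

# Decision table for a toy cause-effect graph (hypothetical rules):
#   causes: C1 = "amount > 0", C2 = "account is open"
#   effect: E1 = "transaction accepted" (true only when both causes hold)
from itertools import product

def effect(c1, c2):
    return c1 and c2

for c1, c2 in product([True, False], repeat=2):
    print(f"C1={c1!s:5} C2={c2!s:5} -> E1={effect(c1, c2)}")  # one test case per row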

WHITE BOX TESTING

White box testing is a test case design method that uses the control structure of the
procedural design to derive test cases. Test cases can be derived that

1. Guarantee that all independent paths within a module have been exercised at
least once,
2. Exercise all logical decisions on their true and false sides,
3. Execute all loops at their boundaries and within their operational bounds, and
4. Exercise internal data structures to ensure their validity.

The Nature of Software Defects

Logic errors and incorrect assumptions are inversely proportional to the probability that a
program path will be executed. General processing tends to be well understood while special
case processing tends to be prone to errors.
We often believe that a logical path is not likely to be executed when it may be executed on
a regular basis. Our unconscious assumptions about control flow and data lead to design
errors that can only be detected by path testing.

Typographical errors are random.

Basis Path Testing

This method enables the designer to derive a logical complexity measure of a procedural
design and use it as a guide for defining a basis set of execution paths. Test cases that
exercise the basis set are guaranteed to execute every statement in the program at least
once during testing.

Flow Graphs

Flow graphs can be used to represent control flow in a program and can help in the
derivation of the basis set. Each flow graph node represents one or more procedural
statements. The edges between nodes represent flow of control. An edge must terminate at
a node, even if the node does not represent any useful procedural statements. A region in a
flow graph is an area bounded by edges and nodes. Each node that contains a condition is
called a predicate node.

Cyclomatic complexity is a metric that provides a quantitative measure of the logical
complexity of a program. It defines the number of independent paths in the basis set and
thus provides an upper bound for the number of tests that must be performed.

The Basis Set

An independent path is any path through a program that introduces at least one new set of
processing statements (must move along at least one new edge in the path). The basis set
is not unique. Any number of different basis sets can be derived for a given procedural
design. Cyclomatic complexity, V(G), for a flow graph G is equal to
1. The number of regions in the flow graph.
2. V(G) = E - N + 2 where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1 where P is the number of predicate nodes.
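
A minimal sketch computing V(G) from an edge list using formula 2 (the flow graph here is hypothetical):

# Cyclomatic complexity V(G) = E - N + 2 for a hypothetical flow graph.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]

nodes = {n for edge in edges for n in edge}
E, N = len(edges), len(nodes)
print(f"E={E}, N={N}, V(G)={E - N + 2}")   # E=7, N=6, V(G)=3
# Cross-check with formula 3: predicate nodes 2 and 5 give P=2, so V(G) = P + 1 = 3.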

Deriving Test Cases

1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph. Even without a flow graph,
V(G) can be determined by counting the number of conditional statements in the code
and adding one.
3. Determine a basis set of linearly independent paths. Predicate nodes are useful for
determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set. Each test
case is executed and compared to the expected results.

Automating Basis Set Derivation

The derivation of the flow graph and the set of basis paths is amenable to automation. A
software tool to do this can be developed using a data structure called a graph matrix. A
graph matrix is a square matrix whose size is equivalent to the number of nodes in the flow
graph. Each row and column correspond to a particular node and the matrix corresponds to
the connections (edges) between nodes. By adding a link weight to each matrix entry, more
information about the control flow can be captured. In its simplest form, the link weight is 1
if an edge exists and 0 if it does not. But other types of link weights can be represented:

• the probability that an edge will be executed,
• the processing time expended during link traversal,
• the memory required during link traversal, or
• the resources required during link traversal.

Graph theory algorithms can be applied to these graph matrices to help in the analysis
necessary to produce the basis set.
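
A sketch of the simplest form of such a graph matrix, where the link weight is 1 when an edge exists (the flow graph is hypothetical):

# Graph (connection) matrix for a hypothetical 4-node flow graph.
# Link weight is 1 if an edge exists, 0 otherwise; richer weights
# (probability, processing time, memory, resources) would replace the 1s.
nodes = [1, 2, 3, 4]
edges = {(1, 2), (2, 3), (2, 4), (3, 4)}

matrix = [[1 if (r, c) in edges else 0 for c in nodes] for r in nodes]
for node, row in zip(nodes, matrix):
    print(node, row)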

Loop Testing
This white box technique focuses exclusively on the validity of loop constructs. Four different
classes of loops can be defined:

1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.

Simple Loops
The following tests should be applied to simple loops where n is the maximum number of
allowable passes through the loop:

1. skip the loop entirely,
2. only one pass through the loop,
3. m passes through the loop where m < n,
4. n - 1, n, n + 1 passes through the loop.
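
For a hypothetical loop allowing at most n passes, those test values could be generated like this (n and m are arbitrary example choices):

# Pass counts to exercise for a simple loop with at most n allowed passes,
# following the four guidelines above.
def simple_loop_test_values(n, m):
    assert 1 < m < n, "m must lie strictly between 1 and n"
    return [0,                  # skip the loop entirely
            1,                  # exactly one pass
            m,                  # m passes, where m < n
            n - 1, n, n + 1]    # around the upper bound

print(simple_loop_test_values(n=10, m=5))   # [0, 1, 5, 9, 10, 11]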

Nested Loops

The testing of nested loops cannot simply extend the technique of simple loops since this
would result in a geometrically increasing number of test cases. One approach for nested
loops:

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum values. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at
minimum values and other nested loops at typical values.
4. Continue until all loops have been tested.

Concatenated Loops

Concatenated loops can be tested as simple loops if each loop is independent of the others.
If they are not independent (e.g. the loop counter for one is the loop counter for the other),
then the nested approach can be used.
Unstructured Loops

This type of loop should be redesigned, not tested.

Other white box testing techniques include:
• Condition testing: exercises the logical conditions in a program.
• Data flow testing: selects test paths according to the locations of definitions and uses
of variables in the program.

UNIT TESTING

In computer programming, a unit test is a method of testing the correctness of a particular
module of source code.

The idea is to write test cases for every non-trivial function or method in the module so that
each test case is separate from the others if possible. This type of testing is mostly done by
the developers.

Benefits

The goal of unit testing is to isolate each part of the program and show that the individual
parts are correct. It provides a written contract that the piece must satisfy. This isolated
testing provides four main benefits:

Encourages change

Unit testing allows the programmer to refactor code at a later date, and make sure the
module still works correctly (regression testing). This provides the benefit of encouraging
programmers to make changes to the code since it is easy for the programmer to check if
the piece is still working properly.

Simplifies Integration

Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a
bottom-up testing style approach. Testing the parts of a program first, and then testing
the sum of its parts, makes integration testing easier.
Documents the code

Unit testing provides a sort of "living document" for the class being tested. Clients looking to
learn how to use the class can look at the unit tests to determine how to use the class to fit
their needs.

Separation of Interface from Implementation

Because some classes may have references to other classes, testing a class can frequently
spill over into testing another class. A common example of this is classes that depend on a
database; in order to test the class, the tester finds himself writing code that interacts with
the database. This is a mistake, because a unit test should never go outside of its own class
boundary. As a result, the software developer abstracts an interface around the database
connection, and then implements that interface with their own Mock Object. This results in
loosely coupled code, thus minimizing dependencies in the system.
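
A sketch of that pattern: the database is hidden behind an interface, and the test substitutes a mock implementation (all names are hypothetical):

# Separating interface from implementation so the unit test never
# touches a real database.
from abc import ABC, abstractmethod

class UserStore(ABC):                    # the abstracted interface
    @abstractmethod
    def get_name(self, user_id): ...

class MockUserStore(UserStore):          # mock object used only in tests
    def get_name(self, user_id):
        return "alice"

def greet(store, user_id):
    """Code under test; depends only on the interface."""
    return f"Hello, {store.get_name(user_id)}!"

assert greet(MockUserStore(), 1) == "Hello, alice!"   # no database needed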

Limitations

It is important to realize that unit-testing will not catch every error in the program. By
definition, it only tests the functionality of the units themselves. Therefore, it will not catch
integration errors, performance problems and any other system-wide issues. In addition, it
may not be trivial to anticipate all special cases of input the program unit under study may
receive in reality. Unit testing is only effective if it is used in conjunction with other software
testing activities.
Requirements Testing

Usage:
• To ensure that the system performs correctly.
• To ensure that correctness can be sustained for a considerable period of time.
• The system can be tested for correctness through all phases of the SDLC, but in the
case of reliability the programs must be in place to make the system operational.

Objective:
• Successful implementation of user requirements.
• Correctness maintained over a considerable period of time.
• Processing of the application complies with the organization's policies and procedures.

Secondary users' needs are fulfilled:
• Security officer
• DBA
• Internal auditors
• Record retention
• Comptroller

How to Use
• Test conditions are created.
• These test conditions are generalized ones, which become test cases as the SDLC
progresses, until the system is fully operational.
• Test conditions are more effective when created from the user's requirements.
• If test conditions are created from documents, then any errors in those documents
will get incorporated into the test conditions, and testing will not be able to find
those errors.
• If test conditions are created from other sources (other than documents), error
trapping is effective.
• A functional checklist is created.

When to Use
• Every application should be requirements tested.
• Testing should start at the requirements phase and progress through to the
operations and maintenance phase.
• The method used to carry out requirements testing, and its extent, are important.

Example
• Creating a test matrix to prove that the system requirements as documented are the
requirements desired by the user.
• Creating a checklist to verify that the application complies with the organizational
policies and procedures.

Regression Testing
Usage:
• To ensure that all aspects of the system remain functional after changes are made.
• To ensure that a change in one segment does not change the functionality of
another segment.
Objective:
• Determine that system documents remain current.
• Determine that system test data and test conditions remain current.
• Determine that previously tested system functions perform properly without being
affected by changes made in some other segment of the application system.
How to Use
• Test cases that were used previously for the already tested segment are re-run to
ensure that the results of the segment tested now and the results of the same
segment tested earlier are the same.
• Test automation is needed to carry out the test transactions (test condition
execution); otherwise the process is very time consuming and tedious.

• The cost/benefit of this testing should be carefully evaluated; otherwise the effort
spent on testing will be high and the payback minimal.
When to Use
• When there is a high risk that new changes may affect unchanged areas of the
application system.
• In the development process: regression testing should be carried out after the pre-
determined changes are incorporated in the application system.
• In the maintenance phase: regression testing should be carried out if there is a high
risk that loss may occur when changes are made to the system.
Example
• Re-running previously conducted tests to ensure that the unchanged portions of the
system function properly.
• Reviewing previously prepared system documents (manuals) to ensure that they are
not affected by changes made to the application system.
Disadvantage
• Time consuming and tedious if test automation is not done.
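
A sketch of the automation this section calls for: a saved suite of previously passing cases that is simply re-run after every change (discount() and its recorded results are hypothetical):

# Regression testing in miniature: re-run recorded test cases after a
# change and confirm the results match those observed earlier.
import unittest

def discount(price, percent):
    """Hypothetical, already-tested function."""
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    # (inputs, expected output) pairs recorded when the segment last passed.
    CASES = [((100.0, 10.0), 90.0), ((59.99, 0.0), 59.99)]

    def test_previous_results_unchanged(self):
        for args, expected in self.CASES:
            self.assertEqual(discount(*args), expected)

if __name__ == "__main__":
    unittest.main()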

Error Handling Testing

Usage:
• Determines the ability of the application system to process incorrect transactions
properly.
• Errors encompass all unexpected conditions.
• In some systems, approximately 50% of the programming effort is devoted to
handling error conditions.
Objective:
• Determine that the application system recognizes all expected error conditions.
• Determine that accountability for processing errors has been assigned and that
procedures provide a high probability that errors will be properly corrected.
• Determine that reasonable control is maintained over errors during the correction
process.
How to Use
• A group of knowledgeable people is required to anticipate what can go wrong in the
application system.
• All the application-knowledgeable people need to assemble to integrate their
knowledge of the user area, auditing, and error tracking.
• Logical test error conditions should then be created based on this assimilated
information.

When to Use
• Throughout the SDLC.
• The impact of errors should be identified and corrected to reduce errors to an
acceptable level.
• Used to assist in the error management process of system development and
maintenance.
Example
• Create a set of erroneous transactions and enter them into the application system,
then find out whether the system is able to identify the problems.
• Using iterative testing, enter transactions and trap errors. Correct them. Then enter
transactions with errors that were not present in the system earlier.
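
A sketch of the first example: deliberately erroneous transactions are entered, and the test passes only if the system recognizes them (process_transaction is hypothetical):

# Error handling testing in miniature: incorrect transactions must be
# rejected with a recognizable error, not silently processed.
import unittest

def process_transaction(amount):
    """Hypothetical system under test."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return "processed"

class TestErrorHandling(unittest.TestCase):
    def test_rejects_negative_amount(self):
        with self.assertRaises(ValueError):
            process_transaction(-5.0)

    def test_rejects_zero_amount(self):
        with self.assertRaises(ValueError):
            process_transaction(0.0)

if __name__ == "__main__":
    unittest.main()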
