* Black-box testing

The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure. [Perry90] It is also termed data-driven, input/output-driven [Myers79], or requirements-based [Hetzel88] testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing -- a testing method that emphasizes executing the functions and examining their input and output data. [Howden87] The tester treats the software under test as a black box -- only the inputs, outputs and specification are visible, and the functionality is determined by observing the outputs for the corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate correctness. All test cases are derived from the specification; no implementation details of the code are considered.
It is obvious that the more of the input space we have covered, the more problems we will find, and therefore the more confident we will be about the quality of the software. Ideally we would be tempted to test the input space exhaustively. But as stated above, exhaustively testing the combinations of valid inputs is impossible for most programs, let alone considering invalid inputs, timing, sequence, and resource variables. Combinatorial explosion is the major roadblock in functional testing. To make things worse, we can never be sure whether the specification is correct or complete. Due to limitations of the language used in specifications (usually natural language), ambiguity is often inevitable. Even if we use some type of formal or restricted language, we may still fail to write down all the possible cases in the specification. Sometimes the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words. And people can seldom specify clearly what they want -- they usually can tell whether a prototype is, or is not, what they want only after it has been finished. Specification problems contribute approximately 30 percent of all bugs in software. [Beizer95]
Research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured by the number of test cases. It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of the input space. Partitioning is one of the common techniques. If we have partitioned the input space and assume that all the input values in a partition are equivalent, then we only need to test one representative value in each partition to sufficiently cover the whole input space. Domain testing [Beizer95] partitions the input domain into regions and considers the input values in each domain an equivalence class. Domains can be exhaustively tested and covered by selecting one or more representative values in each domain. Boundary values are of special interest: experience shows that test cases that explore boundary conditions have a higher payoff than test cases that do not. Boundary value analysis [Myers79] requires one or more boundary values to be selected as representative test cases. The difficulty with domain testing is that incorrect domain definitions in the specification cannot be efficiently discovered.
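As a minimal sketch of these two ideas, assume a hypothetical function classify_age whose specification defines the domains 0-12 (child), 13-17 (teen) and 18-120 (adult); the function, its partitions, and the test values below are invented for illustration. Equivalence partitioning supplies one representative per class, and boundary value analysis adds the values at and just outside each boundary:

    import unittest

    def classify_age(age):
        """Hypothetical function under test: maps an age to a category.
        Assumed domains: 0-12 child, 13-17 teen, 18-120 adult; anything else invalid."""
        if not isinstance(age, int) or age < 0 or age > 120:
            raise ValueError("age out of range")
        if age <= 12:
            return "child"
        if age <= 17:
            return "teen"
        return "adult"

    class TestClassifyAge(unittest.TestCase):
        def test_representative_values(self):
            # One representative value per equivalence class (domain testing).
            self.assertEqual(classify_age(6), "child")
            self.assertEqual(classify_age(15), "teen")
            self.assertEqual(classify_age(40), "adult")

        def test_boundary_values(self):
            # Boundary value analysis: values at and around each domain boundary.
            self.assertEqual(classify_age(0), "child")
            self.assertEqual(classify_age(12), "child")
            self.assertEqual(classify_age(13), "teen")
            self.assertEqual(classify_age(17), "teen")
            self.assertEqual(classify_age(18), "adult")
            self.assertEqual(classify_age(120), "adult")
            # Invalid partitions just outside the valid range.
            self.assertRaises(ValueError, classify_age, -1)
            self.assertRaises(ValueError, classify_age, 121)

    if __name__ == "__main__":
        unittest.main()

Each partition contributes one representative value, and each boundary contributes the values on both of its sides, which keeps the test count small while still covering every domain.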
Good partitioning requires knowledge of the software structure. A good testing plan will contain not only black-box testing, but also white-box approaches, and combinations of the two.
* White-box testing
Contrary to black-box testing, in white-box testing the software is viewed as a white box, or glass box, because the structure and flow of the software under test are visible to the tester. Testing plans are made according to the details of the software implementation, such as programming language, logic, and style. Test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing [Myers79] or design-based testing [Hetzel88].
There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of and attention to the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch statement (branch coverage), or covering all the possible combinations of true and false condition predicates (multiple condition coverage). [Parrington89]
Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software into a directed graph. Test cases are carefully selected based on the criterion that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code -- code that is of no use or never gets executed at all, which cannot be discovered by functional testing.
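As a rough illustration of this idea (the graph below is an invented example, not taken from the text), a control-flow structure can be modelled as a directed graph of basic blocks, and unreachable "dead" blocks can be found with a simple traversal:

    from collections import deque

    # Hypothetical control-flow graph: each node is a basic block,
    # edges point to the blocks that can execute next.
    cfg = {
        "entry":  ["check"],
        "check":  ["then", "else"],
        "then":   ["exit"],
        "else":   ["exit"],
        "orphan": ["exit"],   # dead code: no path from entry reaches it
        "exit":   [],
    }

    def reachable(graph, start):
        """Breadth-first traversal returning every node reachable from start."""
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for succ in graph[node]:
                if succ not in seen:
                    seen.add(succ)
                    queue.append(succ)
        return seen

    dead = set(cfg) - reachable(cfg, "entry")
    print("dead code blocks:", dead)   # -> {'orphan'}

Path- and node-coverage criteria are then defined over this same graph; the sketch only shows the reachability part.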
In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness in failing the mutants: the more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use.
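A toy sketch of the idea follows; the function under test and the hand-written mutants are invented for illustration, whereas real mutation tools perturb the source or bytecode automatically:

    # Original function and a few hand-written mutants, each containing one fault.
    def original(a, b):
        return a + b

    mutants = [
        lambda a, b: a - b,      # mutant 1: operator replaced
        lambda a, b: a + b + 1,  # mutant 2: off-by-one constant
        lambda a, b: a,          # mutant 3: operand dropped
    ]

    # A test case "kills" a mutant if the mutant's output differs from the original's.
    def kills(test_input, mutant):
        a, b = test_input
        return mutant(a, b) != original(a, b)

    test_cases = [(0, 0), (2, 3), (-1, 1)]
    for inputs in test_cases:
        killed = sum(kills(inputs, m) for m in mutants)
        print(f"test {inputs} kills {killed}/{len(mutants)} mutants")
    # (0, 0) kills fewer mutants than (2, 3), so it is the weaker test case.

The kill count is the selection criterion: test cases that distinguish the original from many mutants are kept, weak ones are discarded or strengthened.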
The boundary between the black-box approach and the white-box approach is not clear-cut. Many of the testing strategies mentioned above may not be safely classified into black-box testing or white-box testing. The same is true of transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of a specification itself is broad -- it may contain any requirement, including the structure, programming language, and programming style, as part of the specification content.
We may be reluctant to consider random testing as a testing technique. The test case selection is simple and straightforward: the cases are randomly chosen. A study in [Duran84] indicates that random testing is more cost-effective for many programs. Some very subtle errors can be discovered at low cost, and it is also not inferior in coverage to other carefully designed testing techniques. One can also obtain a reliability estimate using random testing results based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.
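As a minimal sketch (the function under test, its planted fault, and the input range are all invented for illustration), random testing can be as simple as drawing inputs from the input space and checking the outputs against a trusted oracle:

    import random

    def buggy_abs(x):
        # Hypothetical function under test with a subtle fault at one input.
        return -x if x < 0 else (x if x != 7 else 6)

    def oracle(x):
        return abs(x)  # trusted reference used in place of the specification

    random.seed(0)
    failures = []
    for _ in range(10_000):                  # random test cases from the input space
        x = random.randint(-1_000, 1_000)
        if buggy_abs(x) != oracle(x):
            failures.append(x)

    print("failing inputs found:", sorted(set(failures)))

With enough random draws, even a fault confined to a single input value is likely to be hit, which is the kind of subtle error the text refers to.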

BBT (Black-Box Testing):
Functional Testing:
In this type of testing, the software is tested for the functional requirements. The tests are written in order to check if the application behaves as expected. Although functional testing is often done toward the end of the development cycle, it can, and should, be started much earlier. Individual components and processes can be tested early on, even before it's possible to do functional testing on the entire system.
Functional testing covers how well the system executes the functions it is supposed to execute, including user commands, data manipulation, searches and business processes, user screens, and integrations. Functional testing covers the obvious surface type of functions, as well as the back-end operations (such as security and how upgrades affect the system).
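As a small illustration (the discount rule, the function name, and the expected values are hypothetical), a functional test exercises a requirement purely through inputs and outputs, without looking at the implementation:

    # Hypothetical requirement: orders of 100 or more get a 10% discount.
    def order_total(amount):
        return round(amount * 0.9, 2) if amount >= 100 else amount

    # Functional tests derived from the requirement, not from the implementation.
    assert order_total(50) == 50        # below the threshold: no discount
    assert order_total(100) == 90.0     # at the threshold: discount applies
    assert order_total(250) == 225.0    # well above the threshold
    print("functional checks passed")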
STRESS TESTING:
The application is tested against heavy load, such as complex numerical values, a large number of inputs, a large number of queries, etc., which checks the stress/load the application can withstand. Stress testing deals with the quality of the application in its environment.
The idea is to create an environment more demanding of the application than the application would experience under normal workloads. This is the hardest and most complex category of testing to accomplish and it requires a joint effort from all teams. A test environment is established with many testing stations. At each station, a script exercises the system. These scripts are usually based on the regression suite. More and more stations are added, all simultaneously hammering on the system, until the system breaks. The system is repaired and the stress test is repeated until a level of stress is reached that is higher than expected to be present at a customer site.

Race conditions and memory leaks are often found under stress testing. A race condition is a conflict between at least two tests: each test works correctly when done in isolation, but when the two tests are run in parallel, one or both of the tests fail. This is usually due to an incorrectly managed lock. A memory leak happens when a test leaves allocated memory behind and does not correctly return the memory to the memory allocation scheme. The test seems to run correctly, but after being exercised several times, available memory is reduced until the system fails.
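A toy sketch of the kind of race condition that stress-style hammering tends to expose (the Account class, the thread counts, and the missing lock are all invented for illustration):

    import threading

    class Account:
        """Hypothetical shared resource with an intentionally missing lock."""
        def __init__(self):
            self.balance = 0

        def deposit(self, amount):
            current = self.balance        # read
            current += amount             # modify
            self.balance = current        # write -- not atomic, so updates can be lost

    def hammer(account, n):
        for _ in range(n):
            account.deposit(1)

    account = Account()
    threads = [threading.Thread(target=hammer, args=(account, 100_000)) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    expected = 8 * 100_000
    print(f"expected {expected}, got {account.balance}")
    # Depending on thread scheduling, the total may fall short of the expected value,
    # which is exactly the kind of defect stress testing tends to surface.

Each thread plays the role of one "testing station" running a simple script; the defect only shows up when many of them run at once.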
LOAD TESTING:
The application is tested against heavy loads or inputs, such as the testing of web sites, in order to find out at what point the web site/application fails or at what point its performance degrades. Load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly.

Note that load testing does not aim to break the system by overwhelming it, but instead tries to keep the system constantly humming like a well-oiled machine. In the context of load testing, extreme importance should be given to having large datasets available for testing. Bugs simply do not surface unless you deal with very large entities such as thousands of users in repositories such as LDAP/NIS/Active Directory, thousands of mail server mailboxes, multi-gigabyte tables in databases, deep file/directory hierarchies on file systems, etc. Testers obviously need automated tools to generate these large data sets, but fortunately any good scripting language worth its salt will do the job.
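As the passage suggests, a short script is usually enough to generate such datasets. A hedged sketch (the file name, record layout, and record count are invented; a real load test would load the output into the directory or database under test):

    import csv
    import random
    import string

    def random_user(i):
        """Build one synthetic user record (fields invented for illustration)."""
        name = "".join(random.choices(string.ascii_lowercase, k=8))
        return {"id": i, "username": name, "email": f"{name}@example.com"}

    # Generate, say, 100,000 users to populate a repository under load test.
    with open("test_users.csv", "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["id", "username", "email"])
        writer.writeheader()
        for i in range(100_000):
            writer.writerow(random_user(i))
    print("wrote 100000 synthetic user records")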

AD-HOC TESTING:
This type of testing is done without any formal test plan or test case creation. Ad-hoc testing helps in deciding the scope and duration of the various other testing efforts, and it also helps testers in learning the application prior to starting any other testing. It is the least formal method of testing.
One of the best uses of ad hoc testing is for discovery. Reading the requirements or specifications (if they exist) rarely gives you a good sense of how a program actually behaves. Even the user documentation may not capture the look and feel of a program. Ad hoc testing can find holes in your test strategy, and can expose relationships between subsystems that would otherwise not be apparent. In this way, it serves as a tool for checking the completeness of your testing. Missing cases can be found and added to your testing arsenal. Finding new tests in this way can also be a sign that you should perform root cause analysis.
Ask yourself or your test team: what other tests of this class should we be running? Defects found while doing ad hoc testing are often examples of entire classes of forgotten test cases. Another use for ad hoc testing is to determine the priorities for your other testing activities. In our example program, Panorama may allow the user to sort photographs that are being displayed. If ad hoc testing shows this to work well, the formal testing of this feature might be deferred until the problematic areas are completed. On the other hand, if ad hoc testing of this photograph-sorting feature uncovers problems, then the formal testing might receive a higher priority.

EXPLORATORY TESTING
This testing is similar to ad-hoc testing and is done in order to learn/explore the application.
Exploratory software testing is a powerful and fun approach to testing. In some situations, it can be orders of magnitude more productive than scripted testing. At least unconsciously, testers perform exploratory testing at one time or another. Yet it doesn't get much respect in our field. It can be considered scientific thinking in real time.

USABILITY TESTING
This testing is also called testing for user-friendliness. It is done when the user interface of the application is an important consideration and needs to suit a specific type of user. Usability testing is the process of working with end users, directly and indirectly, to assess how the user perceives a software package and how they interact with it. This process will uncover areas of difficulty for users as well as areas of strength. The goal of usability testing should be to limit and remove difficulties for users and to leverage areas of strength for maximum usability. This testing should ideally involve direct user feedback, indirect feedback (observed behavior), and, when possible, computer-supported feedback. Computer-supported feedback is often (if not always) left out of this process. It can be as simple as a timer on a dialog to monitor how long it takes users to use the dialog, and counters to determine how often certain conditions occur (e.g. error messages, help messages). Often this involves trivial modifications to existing software, but it can result in a tremendous return on investment. Ultimately, usability testing should result in changes to the delivered product in line with the discoveries made regarding usability. These changes should be directly related to real-world usability by average users. As much as possible, documentation should be written supporting the changes so that in the future, similar situations can be handled with ease.
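A minimal sketch of such computer-supported feedback (the class, hook names, and event labels are hypothetical; a real UI would call these hooks from its dialog code):

    import time
    from collections import Counter

    class DialogInstrumentation:
        """Tiny helper that times how long a dialog stays open and counts events."""
        def __init__(self):
            self.events = Counter()
            self.durations = []
            self._opened_at = None

        def dialog_opened(self):
            self._opened_at = time.monotonic()

        def dialog_closed(self):
            if self._opened_at is not None:
                self.durations.append(time.monotonic() - self._opened_at)
                self._opened_at = None

        def record(self, event):
            self.events[event] += 1   # e.g. "error_message", "help_message"

    # Usage sketch: the UI code would call these hooks around the real dialog.
    stats = DialogInstrumentation()
    stats.dialog_opened()
    stats.record("help_message")
    stats.record("error_message")
    stats.dialog_closed()
    print(stats.events, [round(d, 3) for d in stats.durations])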

SMOKE TESTING:
This type of testing is also called sanity testing and is done in order to check if the application is ready for further major testing and is working properly, without failing at even the most basic level. A smoke test is a test of new or repaired equipment by turning it on: if it smokes... guess what... it doesn't work! The term also refers to testing the basic functions of software. It was originally coined in the manufacture of containers and pipes, where smoke was introduced to determine if there were any leaks.
A common practice at Microsoft and some other shrink-wrap software companies is the "daily build and smoke test" process. Every file is compiled, linked, and combined into an executable program every day, and the program is then put through a "smoke test," a relatively simple check to see whether the product "smokes" when it runs.

RECOVERY TESTING
Recovery testing is basically done in order to check how quickly and how well the application can recover from any type of crash or hardware failure. The type or extent of recovery is specified in the requirement specifications. It is essentially testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
VOLUME TESTING:
Volume testing checks the efficiency of the application. A huge amount of data is processed through the application (which is being tested) in order to check the extreme limitations of the system.
Volume testing, as its name implies, is testing that purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transaction processing systems capturing real-time sales, or systems handling database updates and/or data retrieval.
Volume testing will seek to verify the physical and logical limits of a system's capacity and ascertain whether such limits are acceptable to meet the projected capacity of the organization's business processing.

DOMAIN TESTING
Domain testing is the most frequently described test technique. Some authors write only about domain testing when they write about test design. The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent. Then you test a representative from each subset.

SCENARIO TESTING:
Scenario tests are realistic, credible and motivating to stakeholders, challenging for the program, and easy to evaluate for the tester. They provide meaningful combinations of functions and variables rather than the more artificial combinations you get with domain testing or combinatorial test design.

REGRESSION TESTING:
Regression testing is a style of testing that focuses on retesting after changes are made. In traditional regression testing, we reuse the same tests (the regression tests). In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests. Traditional regression tests are often partially automated. These notes focus on traditional regression testing.

Regression testing attempts to mitigate two risks:
* A change that was intended to fix a bug failed.
* Some change had a side effect, unfixing an old bug or introducing a new bug.

Regression testing approaches differ in their focus. Common examples include:

Bug regression: We retest a specific bug that has been allegedly fixed.
Old fix regression testing: We retest several old bugs that were fixed, to see if they are back. (This is the classical notion of regression: the program has regressed to a bad state.)
General functional regression: We retest the product broadly, including areas that worked before, to see whether more recent changes have destabilized working code. (This is the typical scope of automated regression testing.)
Conversion or port testing: The program is ported to a new platform and a subset of the regression test suite is run to determine whether the port was successful. (Here, the main changes of interest might be in the new platform, rather than the modified old code.)
Configuration testing: The program is run with a new device, on a new version of the operating system, or in conjunction with a new application. This is like port testing except that the underlying code hasn't been changed -- only the external components that the software under test must interact with.
Localization testing: The program is modified to present its user interface in a different language and/or following a different set of cultural rules. Localization testing may involve several old tests (some of which have been modified to take into account the new language) along with several new (non-regression) tests.
Smoke testing, also known as build verification testing: A relatively small suite of tests is used to qualify a new build. Normally, the tester is asking whether any components are so obviously or badly broken that the build is not worth testing, whether some components are broken in obvious ways that suggest a corrupt build, or whether critical fixes that were the primary intent of the new build didn't work. The typical result of a failed smoke test is rejection of the build (testing of the build stops), not just a new set of bug reports.
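As a small sketch of bug regression and general functional regression (the bug number, function, and inputs are invented), a test that reproduces a previously fixed defect is kept in the suite and rerun after every change:

    import unittest

    def parse_quantity(text):
        """Hypothetical fixed function: bug #123 was that leading spaces crashed it."""
        return int(text.strip())

    class RegressionTests(unittest.TestCase):
        def test_bug_123_leading_whitespace(self):
            # Bug regression: this exact input used to fail before the fix.
            self.assertEqual(parse_quantity("  42"), 42)

        def test_basic_parsing_still_works(self):
            # General functional regression: previously working behaviour stays intact.
            self.assertEqual(parse_quantity("7"), 7)

    if __name__ == "__main__":
        unittest.main()

Because the same tests are reused unchanged after every modification, suites like this are natural candidates for the partial automation mentioned above.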

USER ACCEPTANCE TESTING:
In this type of testing, the software is handed over to the user in order to find out if the software meets the user's expectations and works as it is expected to. In software development, user acceptance testing (UAT) -- also called beta testing, application testing, and end user testing -- is a phase of software development in which the software is tested in the "real world" by the intended audience.
UAT can be done by in-house testing, in which volunteers or paid test subjects use the software, or, more typically for widely distributed software, by making the test version available for downloading and free trial over the Web. The experiences of the early users are forwarded back to the developers, who make final changes before releasing the software commercially.

ALPHA TESTING:
In this type of testing, the users are invited to the development center, where they use the application and the developers note every particular input or action carried out by the user. Any type of abnormal behavior of the system is noted and rectified by the developers.
BETA TESTING:
In this type of testing, the software is distributed as a beta version to the users, and users test the application at their own sites. As the users explore the software, any exception/defect that occurs is reported to the developers. Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the general public to increase the feedback field to a maximal number of future users.

WHITE BOX TESTING:
The white box testing strategy deals with the internal logic and structure of the code. White box testing is also called glass, structural, open-box or clear-box testing. The tests written based on the white box testing strategy incorporate coverage of the code written: branches, paths, statements, the internal logic of the code, etc.
In order to implement white box testing, the tester has to deal with the code and hence needs to possess knowledge of coding and logic, i.e. the internal working of the code. White box testing also requires the tester to look into the code and find out which unit/statement/chunk of the code is malfunctioning.

Advantages of white box testing are:

* As the knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
* Another advantage of white box testing is that it helps in optimizing the code.
* It helps in removing the extra lines of code, which can bring in hidden defects.

Disadvantages of white box testing are:

* As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
* It is nearly impossible to look into every bit of code to find out hidden errors, which may create problems, resulting in failure of the application.
UNIT TESTING:
The developer carries out unit testing in order to check if a particular module or unit of code is working fine. Unit testing comes at the very basic level, as it is carried out as and when the unit of code is developed or a particular functionality is built. Unit testing deals with testing a unit as a whole. This would test the interaction of many functions but confine the test within one unit. The exact scope of a unit is left to interpretation. Supporting test code, sometimes called scaffolding, may be necessary to support an individual test. This type of testing is driven by the architecture and implementation teams. This focus is also called black-box testing because only the details of the interface are visible to the test. Limits that are global to a unit are tested here.
In the construction industry, scaffolding is a temporary, easy to assemble and disassemble, frame placed around a building to facilitate its construction. The construction workers first build the scaffolding and then the building; later the scaffolding is removed, exposing the completed building. Similarly, in software testing, one particular test may need some supporting software. This software establishes an environment around the test; only when this environment is established can a correct evaluation of the test take place. The scaffolding software may establish state and values for data structures as well as provide dummy external functions for the test. Different scaffolding software may be needed from one test to another. Scaffolding software is rarely considered part of the system. Sometimes the scaffolding software becomes larger than the system software being tested. Usually the scaffolding software is not of the same quality as the system software and frequently is quite fragile: a small change in the test may lead to much larger changes in the scaffolding.
Internal and unit testing can be automated with the help of coverage tools. A coverage tool analyzes the source code and generates a test that will execute every alternative thread of execution. It is still up to the programmer to combine these tests into meaningful cases to validate the result of each thread of execution. Typically, the coverage tool is used in a slightly different way: first the coverage tool is used to augment the source by placing informational prints after each line of code, then the testing suite is executed, generating an audit trail. This audit trail is analyzed to report the percentage of the total system code executed during the test suite. If the coverage is high and the untested source lines are of low impact to the system's overall quality, then no additional tests are required.
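A brief sketch of unit testing with scaffolding (the unit, the tax-rate dependency, and the values are invented): a dummy external function stands in for the real service so the unit can be evaluated in isolation:

    import unittest
    from unittest import mock

    def price_with_tax(amount, get_tax_rate):
        """Unit under test: depends on an external lookup passed in as a function."""
        return round(amount * (1 + get_tax_rate()), 2)

    class TestPriceWithTax(unittest.TestCase):
        def test_uses_external_rate(self):
            # Scaffolding: a dummy external function replaces the real tax service.
            fake_rate = mock.Mock(return_value=0.20)
            self.assertEqual(price_with_tax(10.0, fake_rate), 12.0)
            fake_rate.assert_called_once()

    if __name__ == "__main__":
        unittest.main()

Here the mock object is the scaffolding: it establishes the environment (a known tax rate) around the test and is discarded once the unit is verified.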

STATIC AND DYNAMIC ANALYSIS


Static analysis involves going through the code in order to find out any possible defect in the code. Dynamic analysis involves executing the code and analyzing the output.

STATEMENT COVERAGE:
In this type of testing the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements execute without any side effects.
BRANCH COVERAGE:
No software application can be written in a continuous mode of coding; at some point we need to branch out the code in order to perform a particular functionality. Branch coverage testing helps in validating all the branches in the code and making sure that no branch leads to abnormal behavior of the application.
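A small illustration of the difference between the two criteria (the function is invented): a single test can execute every statement yet still miss a branch:

    def safe_divide(a, b):
        result = 0
        if b != 0:
            result = a / b
        return result

    # Statement coverage: this single call executes every statement above
    # (the assignment, the if body, and the return).
    assert safe_divide(10, 2) == 5

    # Branch coverage additionally requires the false outcome of `if b != 0`,
    # so a second test with b == 0 is needed.
    assert safe_divide(10, 0) == 0

Branch coverage therefore subsumes statement coverage: every suite that covers all branches also covers all statements, but not the other way around.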

SECURITY TESTING:
Security testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking, cracking, any code damage, etc. that deals with the code of the application. This type of testing needs sophisticated testing techniques.

MUTATION TESTING:
A kind of testing in which the application is tested against code that has been modified after fixing a particular bug/defect. It also helps in finding out which code and which strategy of coding can help in developing the functionality effectively.
