
Illusions about software testing



This presentation is intended to help you argue for the need to check and test
software. Many stakeholders involved in the software industry have illusions
about testing that hinder you from doing a good job. There are management
illusions, developer illusions, tester illusions and user illusions.

The most common are:

Management illusions
o What is testing, anyway
o Anybody can test
o You can test in quality
o Some products need no testing
o Automated testing vs. exploratory testing
Developer illusions
o My software has no bugs (or too few to care)
o The testers are going to find all bugs anyway
o Thorough unit testing is enough
Tester illusions
o I am allowed to stop bad software
o I need to test everything
o I can test everything
o If it works, it is good enough
o Thorough system testing is enough
Customer / User illusions
o Buy it and forget it
o We do not need to test
o Testers do not need to communicate with us

Some of these illusions have been argued against for a long time and it should be
known that they are wrong. However, as testers, we still run into them and this
creates trouble in our work. The final illusion is that the ones mentioned here are
the only ones. This is not true: There are many more of them.

Management illusions
What is testing

Testing is widely misunderstood, and because of this misunderstanding, different
people may mean very different things by it.

When software was new, people thought testing was an activity that would show
that software worked. Testing was demonstration, it was positive. Running the
program to show that it is right.


This is, to say the least, dangerous. It is theoretically and practically impossible
to show that a program is correct. Additionally, what is right for one user may be
wrong for another, depending on personal needs. We have to distinguish
between technical quality, where the testing task is verification, and user
quality, where testing does validation. Psychologically, it is also problematic: if
people are supposed to show that things are correct, they will do their best to do
so and will tend to overlook problems that do not contribute to this goal.

As a reaction to this, testing was defined as the opposite: testing is trying to
show that the software does NOT work. The slogan: "You make it, I'll break it!"
Psychologically, this is a much better definition. We try our best to find trouble.
Only if we do not find trouble does our trust in the software product increase, as
a side effect. This definition also conforms to the usual way scientific advances
are criticized and then accepted: new theories are finally accepted only after
comprehensive criticism. The trouble with this definition, however, is that it tends
to create a kind of war between testing and development. This is detrimental to
productivity.

Testing was then redefined as a measurement task: measuring product
quality and/or measuring product risk. Thus, testing is independent of
development, tries to be objective and just collects information about the product
for whoever is interested. This sounds better. However, there may be a
motivational problem: if testers find no trouble, they may see their work as
boring. If product owners only get positive feedback, i.e. no trouble found, they
may see test resources as a waste. On the other hand, it is rarely true that
testers find no trouble, and the information itself has value. Actually, one might say:
testing is the KGB of the product owner. Its task is to provide information.
However, most people do not like the KGB.

Thus, testing needs yet another definition: testing is any activity that
evaluates a product in order to provide information and feedback and to
contribute to future improvement.

This implies that testing is not only pounding at the keyboard and looking at
the screen (difficult to do with most control programs anyway). Testing is
any evaluation done on any part of a product. It may be reading or reviewing
documents. It may be static analysis using tools (like spell-checking documents
or analyzing code), or, finally, it may be executing the product with inputs and
checking outcomes. The goal of it all is to provide information to product owners,
but also to provide feedback about what is done wrong and to help the people who
do it wrong to improve. Thus, testing is feedback to help learning. Every
problem found can be analyzed in order to find more such problems in the
future (testing improvement) or in order to prevent such problems in the
future (development improvement). The basis for finding problems, however, is a
critical attitude in every tester: "It is best that I look for problems."

Many illusions about testing are prevented when we agree on a common
definition.

Anybody can test



This essentially means that testers are second-class citizens: whoever is less
educated or less skillful can be used to test software. The trouble is that testers
need to be highly skilled and knowledgeable in very different domains: test
techniques, the use of test tools, test automation, program development (in order to
know what could be wrong, but also in order to program test scripts) and the
product domain. A knowledgeable and skillful tester will find more bugs more
efficiently. Good people save money.

You can test in quality



This illusion is as old as product development. The hardware industry has learnt
this the hard way. With software we still have to learn it. Bugs cost
more the later they are found. When fixing bugs there is a risk of introducing
new bugs and side-effects, and this risk increases with the size and age of the
product. Finding bugs by late test execution is unproductive and dangerous. If
testing reveals serious unreliability, it may be too late to fix the bugs. Bugs should
be found by reviewing specifications and design, not by executing test cases. If
something is unreliable, it should rather be thrown away and built anew from
scratch. Otherwise, bad solutions are carried on and will cost too much later.

Finally, if the solution is all wrong, no fixing of symptoms after a test will help
anyway.

However, there is a chance to IMPROVE quality if the product entering test is not
too bad! If we have a reasonable product, we may shake out some more
bugs by late exploratory testing and this way improve quality. And we should do
so. We do not find every bug by systematic testing, automated testing, reviews
or static analysis. Some are only found by dynamic testing using people's brains
(brain-in-the-loop testing). But using this kind of testing as the only means is
normally a waste of time.

Some products need no testing



A recent trend is releasing software with faults. In many of the most popular
internet services, the users find the problems and these are fixed in production
use. However, people do not die from bugs in such products. And still, there is a
lot of testing in such products before release.

Human work is error-prone. Whatever humans do is subject to human error;
thus, if failure in use would be too dangerous, there must be testing. Testing
actually means using any technique there is in order to find information on
possible defects. Part of this is reviewing, part is using automatic checking or
analysis tools, and part is designing and running test cases. If none of these
methods is applied, it is unknown whether anything could be wrong, and the
resulting risk is, in most cases, far too high.

Thus, what is testing?



Testing is using any analysis technique in order to get information about the test
object. Testing is measuring the risk of using the test object. The tester is the
product owner's KGB. But testing is also learning: learning how to prevent bugs
and how to find them earlier and more completely. This way, testing contributes
to continuous improvement.

Automated testing vs. exploratory testing



Actually, there are two illusions here:

"We have so much automated testing that we do not need manual testing", and
"Exploratory testing, mainly done manually, finds so many bugs that automated
testing is a waste of resources".

Automated testing IS important. It gives us the possibility to test the main
features of the software for regressions, as well as to try the software with
different settings and platforms. Without automated testing, it is definitely not
safe to change anything. With ever increasing complexity and ever increasing
possibilities to introduce bugs, we have to use every means of fighting them.
Automated testing is one of our chances to do so.

However, automated testing only checks what it is told to check, and telling it
means programming. A tester has to predict what is important to check, and many
features are necessarily left out. Additionally, there is a concern for the
maintainability of automated test cases and scripts. This often leads to relatively
simple tests, testing one feature at a time, and to leaving the user interface out of
the loop, testing through APIs instead. Manual exploratory testing, on the other
hand, keeps the brain in the loop. The human brain is adaptable, and human
testers will check many things an automated script will not check. Additionally,
exploratory tests may check many features at a time, touring through the
application in different ways and simulating real user actions. However,
exploratory testing becomes a hopeless undertaking if the product is too unstable.
Thus, both techniques need to be applied.
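To make concrete what "telling it means programming" looks like, here is a minimal
sketch of an automated regression check in Python; the discount function and its
expected values are made up for illustration. Note that it verifies exactly the
expectations it was programmed with and nothing else:

# Minimal sketch of an automated regression test (hypothetical example).
# It re-checks fixed expectations on every run; it will not notice any
# problem outside the inputs and outcomes it was programmed to check.

def discount(price: float, customer_years: int) -> float:
    """Toy function under test: 5% off after 2 years, 10% off after 5."""
    if customer_years >= 5:
        return round(price * 0.90, 2)
    if customer_years >= 2:
        return round(price * 0.95, 2)
    return price

def test_discount_regression():
    # Fixed expectations, re-run after every change to catch regressions.
    assert discount(100.0, 0) == 100.0
    assert discount(100.0, 2) == 95.0
    assert discount(100.0, 5) == 90.0

if __name__ == "__main__":
    test_discount_regression()
    print("regression checks passed")

Such a script is cheap to re-run on every change, which is its strength; anything
outside its three assertions, it will never see, which is its limitation.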

Developer illusions
My software has no bugs (or too few to care)

This corresponds to the management belief that some products do not need
testing. Humans err, and developers err. Developers need the other pair of eyes a
tester would give. A typical developer introduces about 1 bug for every five lines
of code. Even the best ones are only a factor of 20 better, which would be 1 bug in
100 lines of code. That bug may be fatal! If it is not worth caring about, what
about the user of the product?
Users care if the software does not work. Developers who believe in their own
infallibility need especially thorough reviewing.

The testers are going to find all bugs anyway


No. According to industry statistics collected by Capers Jones (Software
Productivity Research), no testing method short of high-volume beta testing can be
relied on to find more than 50% of the bugs. Even with six phases of the best
testing, 3% of the bugs will still survive. And testing is like a filter: garbage in
leads to garbage out. The more bugs go in, the more bugs come out. Bugs also take
time away from testing: the more bugs are found, the more work is spent isolating
them, documenting them, fixing them, retesting and regression testing. Thus less
testing is done, and fewer of the remaining bugs are found. Believing in the testers
is dangerous risk compensation. Introducing start criteria for testing may be a
solution.
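To see how such figures compound across test phases, here is a rough illustration
in Python; this is an assumed model, not Jones's own formula. If each phase
removes a fixed fraction of the bugs still present, the survivors shrink
geometrically:

# Rough illustration (assumed model): if each of six test phases removes a
# fraction `removal_rate` of the bugs still present when it starts, the
# surviving fraction compounds as (1 - removal_rate) ** phases.

phases = 6

for removal_rate in (0.50, 0.44):
    surviving = (1.0 - removal_rate) ** phases
    print(f"{removal_rate:.0%} per phase -> {surviving:.1%} of bugs survive")

# 50% per phase -> 1.6% of bugs survive (the quoted best-case removal rate)
# 44% per phase -> about 3% survive, close to the figure cited above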

Thorough unit testing is enough


In agile methods, a lot of automated unit and continuous integration testing may
be done. Many people believe this is enough. However, this testing is directed
at coding and interface bugs; the total system may still not meet the
requirements.

Every test level has its own special objectives. Good unit testing reduces the bug
content in the system, but testing the parts does not guarantee that the whole is
good. (Testing cannot guarantee anything anyway.)
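A minimal sketch of this gap, with hypothetical names throughout: the unit test
below can pass on every build, yet it says nothing about whether the assembled
system meets a requirement that spans several components:

# Minimal sketch (hypothetical names): a passing unit test does not show
# that the whole system meets its requirements.

def net_price(gross: float, vat_rate: float) -> float:
    """Unit under test: strip VAT from a gross price."""
    return round(gross / (1.0 + vat_rate), 2)

def test_net_price():
    # Unit-level check: the arithmetic of this single function is correct.
    assert net_price(125.0, 0.25) == 100.0

# A system-level requirement such as "orders above 1000 must be routed to
# manager approval" involves UI, workflow and database working together;
# no amount of unit tests like the one above can demonstrate it.

if __name__ == "__main__":
    test_net_price()
    print("unit check passed")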

Tester illusions
I am allowed to stop bad software

Beware of being the quality police! Releasing software is a plain business
decision, to be made by the product owner. A tester's task is to provide
information, to measure the risk of releasing. The decision to release or not
depends on the risk, but also on factors beyond our control, like the state of the
market. Sometimes products full of bugs may be released because otherwise a
market window would close. In that case it may be better to get product
sales going and fix the problems later, even if that might be expensive.

Testers may, however, point out possibilities for action. Such possibilities should
be used during the end game, right before release. For example, if some features
are full of serious bugs, they might be removed, disabled or simplified. If they
cannot be cut out, the testers may propose postponing the release. But the testers
do not decide on release. A test report is only a statement about the risk of
releasing.

I need to test everything



Testing everything is impossible. This would be exhaustive testing: testing every
combination of inputs, environments, timing and states, which creates an
astronomical number of test cases. Testing a program on a 32-bit architecture
with only one integer input would already require 2^32 (about 4.3 billion) test
cases.

But testers may test everything once. This may mean testing every feature, every
requirement, every window, dialog box, field, batch job, background activity or
whatever is visible. Testers may continue by combining these, first pairwise, then
further, as far as they consider necessary. But any kind of combination testing
will explode the number of test cases and thus be expensive.

A test will always be a very small sample with regard to the total possibilities,
and bugs may go unnoticed. Statistically speaking, that sample is not even valid.
But this is the best we can do.
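For a feel of the numbers, here is a small sketch with made-up parameters (six
input fields with ten interesting values each) contrasting "everything once", the
value pairs that pairwise testing must cover, and exhaustive combination testing:

# Minimal sketch with made-up parameters: how quickly combination testing explodes.
import math

# Suppose a dialog has 6 input fields, each with 10 interesting values.
# (A single 32-bit integer input alone already has 2**32 possible values.)
fields = 6
values_per_field = 10

everything_once = fields * values_per_field                     # each value exercised at least once
pairs_to_cover = math.comb(fields, 2) * values_per_field ** 2   # value pairs pairwise testing must cover
exhaustive = values_per_field ** fields                         # every combination of all six fields

print("single 32-bit integer  :", 2 ** 32)           # 4294967296
print("each value once        :", everything_once)   # 60
print("value pairs to cover   :", pairs_to_cover)    # 1500
print("exhaustive combinations:", exhaustive)        # 1000000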

I can test everything



This is plainly not true; see the previous illusion for details. If a tester claims
this, either she is ignorant or the program is trivial. Even with automated tests
this can never be achieved.

If it works, it is good enough



Well, it depends. If the purpose is a product demonstration at a conference, under
the full control of the demonstrator, then this may be true. If a product shall be
released to real users, it is never true. Users do all sorts of things. They seldom
behave the way we think they will. They do things in different orders, repeat
things, leave things out and do many things wrong. All this needs to be checked
out by testers. Even if we run extensive automated tests, designed using all the
best techniques available, exploratory testing will in most cases find some new
bugs, just because we introduce new variation, as users would. Thus, testing
never gives full safety.

Thorough system testing is good enough



System testing is testing the whole integrated system. It is checking that the
system conforms to requirements. The trouble is in the details: Testing every
requirement one or a few times will probably leave many parts of the code
outside of testing. The parts typically left out unknowingly by system testers are
error and exception handling. As developers tend to introduce more bugs in
these parts than in the mainstream code, these parts are especially error-prone
and should thus be tested more. This is often impossible in system testing.

Many testers who have tested systems where unit and integration testing was done
badly or not at all can report on the resulting problems. It is often frustrating to
test a system which is not robust and where trivial bugs show up. As a tester, one
gets the feeling of being misused to clean up other people's mess.

Customer / User illusions


Buy it and forget it

This is a dangerous one! Some users think the developers know what to do. They
also think they themselves do not know how to acceptance test, and they let the
developing organization do this task. The result is often a disaster: the system
does something, but not what the customer intended. Developers will implement
what they think the customer wanted, not what the customer REALLY wanted. In
Norwegian we say "Bukken passer havresekken" (the goat is guarding the sack of
oats).

Modern methods all emphasize that the customer or a customer-like person
must be part of the project work all the time, in order to specify what they want,
in order to see what happens, in order to be available if developers ask, and in
order to make sure that there are reasonable acceptance test cases. If a customer
chooses not to be available, development is, to say the least, at a high risk.

Customers need to be available and are responsible for designing acceptance test
scenarios.

We do not need to test



Why? Because the software will do what we need anyway? If a product is
expensive enough, it is better to check first. The need to test only disappears
with very cheap standard products where we can base our decision on the
decision many others have made before, and on their experience. Otherwise we
have to test. This is a variation of the buy it and forget it illusion.

A variation of this, from Berit Hatten. Customer: "We do not need to check how it
works at our place / in our organization... etc. It worked with a friend of mine..."

Would this work on YOUR platform? With YOUR use?

And another one. Customer manager: "Eh, do you need to test? Do you sell bad
software?" I think the best answer is not to sell to such customers, or, as an
alternative, to sell only after a very careful explanation of the above illusion.

Testers do not need to communicate with us



This means the testers will not know what we, the customers, want. It also means
that we do not benefit from the testers' knowledge about what could be worth a
closer look. If it is not important that a product is reliable and meets
expectations, then communication with testers is not important. Otherwise it is.
There should be a lot of communication between customer, tester and developer.

References


There are definitely more illusions than the ones mentioned here. There are also
many more references where illusions may be found, as well as advice on what to
do about them. The following, however, were the main inspirations for this talk.

(1) Gerald M. Weinberg, Perfect Software: And Other Illusions about Testing,
Dorset House, 2008. This book is inspiring. The author describes many
more illusions and fallacies and how to fight them. He also describes
many of the psychological reasons behind them.
(2) James A. Whittaker, Exploratory Software Testing, Addison-Wesley, 2010.
This is a great book about testing using improvisation and people's brains.
It also does away with the illusion that no other testing than this is
necessary.
(3) Lisa Crispin and Tip House, Testing Extreme Programming, Addison-Wesley,
2003. This book is rather technical, showing how to implement automated
unit testing. However, it emphasizes that testers are not vacuum cleaners
and that they have a right to a minimum quality in what they have to test.
