
SOFTWARE TESTING

● Software: Software is any set of machine-readable instructions (most often in the form of a computer program) that directs a computer's processor to perform specific operations.

● Testing: Testing is the process of evaluating a system or its component(s) with the intent of determining whether it satisfies the specified requirements.
SOFTWARE TESTING

● Process of evaluating the attributes and capabilities of a program and determining that it meets user requirements.
● Done to find defects

● To determine that the software is working as required (Validation & Verification)
● Reliability & Usability
● To measure performance (Non-functional Attributes)
SOFTWARE TESTING
● Software testing is a process used to identify the correctness,
completeness, and quality of developed computer software. It
includes a set of activities conducted with the intent of finding errors in
software so that they can be corrected before the product is released to
the end users.

● In simple words, software testing is an activity to check whether the actual results match the expected results and to ensure that the software system is defect free.

● Testing the completeness and correctness of a software product.
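The actual-versus-expected check described above can be sketched in a few lines of Python (the `add` function and its values are hypothetical examples, not from the slides):

```python
# Hypothetical function under test.
def add(a, b):
    return a + b

# The core of any test: compare the actual result with the expected one.
expected = 5
actual = add(2, 3)
assert actual == expected, f"expected {expected}, got {actual}"
print("PASS: actual matches expected")
```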

Types of testing:

● Static & Dynamic
● Manual & Automation
● Functional & Non-Functional
OBJECTIVES OF SOFTWARE TESTING
● To ensure that the solution meets the business and user
requirements;
● To catch errors, which can be bugs or defects;
● To determine user acceptability;
● To ensure that the system is ready for use;
● To gain confidence that it works;
● To improve the quality of the software;
● To measure performance;
● To evaluate the capabilities of the system and show that it performs as intended;
● To verify that the software conforms to standards;
● Validation & Verification
● Usability & Reliability
IMPORTANCE OF SOFTWARE TESTING

● To improve the quality, reliability & performance of software.
● Software bugs can potentially cause monetary and
human loss, history is full of such examples
● On April 26, 1994, a China Airlines Airbus A300 crashed due to a software bug, killing 264 people.
● In 1985, Canada's Therac-25 radiation therapy machine malfunctioned due to a software bug and delivered lethal radiation doses to patients, leaving 3 people dead and critically injuring 3 others.
● In April 1999, a software bug caused the failure of a $1.2 billion military satellite launch, the costliest such accident in history.
● In May 1996, a software bug caused the bank accounts of 823 customers of a major U.S. bank to be credited with 920 million US dollars.
CHARACTERISTICS OF SOFTWARE TESTERS

● Be skeptical and question everything
● Don't compromise on quality
● Think from the user's perspective
● Prioritize tests
● Start testing early
● Listen to all suggestions
● Identify and manage risks
● Develop analytical skills
● Test the negative side as well
● Don't blame others
● Stay calm when bugs are rejected
● Explain bugs clearly
● Be creative: good testers are troubleshooters and explorers
RESPONSIBILITIES OF SOFTWARE
TESTERS
● Analyze requirements
● Prepare test plans
● Create test cases and test data
● Analyze test cases of other team members
● Execute test cases
● Log and track defects
● Provide complete information while logging bugs
● Prepare summary reports
● Create use cases
● Suggest improvements to quality
● Communicate with test leads/managers, clients, business teams, etc.
TESTING TERMINOLOGY
● Error/Mistake
● Bug/Defect/Fault
● Failure
● Black Box Testing
● White Box Testing
● Grey Box Testing
● Functional Testing
● Non Functional Testing
● Manual Testing
● Automation Testing
● Error/Mistake: A human action that produces an incorrect result.
● Bug/Defect/Fault: A flaw in a component or system that can cause the component or system to fail to perform its required function.
● Failure: The deviation of a component or system from its expected delivery, service, or result.
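A minimal sketch of how the three terms relate, using a hypothetical `average` function: the programmer's mistake (error) leaves a flaw in the code (defect), which produces a wrong result when the code runs (failure):

```python
# ERROR: the programmer's mistake while writing the code.
# DEFECT: the resulting flaw -- dividing by a hard-coded 2
#         instead of len(values).
def average(values):
    return sum(values) / 2  # defect: should be len(values)

# FAILURE: the deviation observed when the defective code executes.
result = average([10, 20, 30])
print(result)  # prints 30.0, but the expected average is 20.0
```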
When can defects arise?
● Requirement phase
● Design Phase
● Build/Implementation phase
● Testing Phase
● After Release/maintenance phase
BLACK BOX TESTING
● The technique of testing without having any knowledge
of the interior workings of the application is Black Box
testing. The tester is oblivious to the system
architecture and does not have access to the source
code. Typically, when performing a black box test, a tester will interact with the system's user interface by providing inputs and examining outputs, without knowing how or where the inputs are processed.
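As a sketch, black box test cases are derived only from the specification; inputs go in, outputs are checked. The `is_leap_year` function below is a hypothetical stand-in for the system under test (in real black box testing the tester would never see its implementation):

```python
import unittest

# Stand-in for the system under test; the tester only knows its
# specification: "a year is a leap year if divisible by 4, except
# century years, unless they are divisible by 400."
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class BlackBoxTest(unittest.TestCase):
    # Each test feeds an input and examines the output only.
    def test_ordinary_leap_year(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_non_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_century_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```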
WHITE BOX TESTING
● White box testing is the detailed investigation of internal
logic and structure of the code. White box testing is also
called glass box testing or open box testing. In order to perform
white box testing on an application, the tester needs to
possess knowledge of the internal working of the code.
● The tester needs to have a look inside the source code and
find out which unit/chunk of the code is behaving
inappropriately.
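White box test cases, by contrast, are derived by reading the code itself. A minimal sketch with a hypothetical `classify_age` function: the tester inspects the branches in the source and picks inputs so every branch executes at least once:

```python
# Hypothetical function; the tester reads this code and chooses
# inputs so that every branch is exercised at least once.
def classify_age(age):
    if age < 0:
        return "invalid"
    elif age < 18:
        return "minor"
    else:
        return "adult"

# One test per branch, chosen from the code structure:
assert classify_age(-1) == "invalid"  # covers the first branch
assert classify_age(10) == "minor"    # covers the second branch
assert classify_age(30) == "adult"    # covers the third branch
print("all branches covered")
```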
GREY BOX TESTING
● Grey box testing is a technique that combines black box and white box testing: the tester has partial knowledge of the internal workings of the application.
FUNCTIONAL TESTING
● Functional Testing: Testing based on an
analysis of the specification of the
functionality of a component or system
● The process of testing to determine the
functionality of a software product.
NON FUNCTIONAL TESTING

● Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.
MANUAL TESTING
● Manual Testing: Testing the software manually, without any automation tool, i.e. executing the test cases by hand.
AUTOMATION TESTING
● Automation Testing: Testing the software with an automation tool, i.e. executing the test cases through automation tools.
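A minimal sketch of automation: test cases are stored as data and executed by a script rather than a person, so the whole suite can be re-run at any time. The `is_even` function and the cases are hypothetical examples:

```python
# Hypothetical function under test.
def is_even(n):
    return n % 2 == 0

# Test cases as data: (input, expected output).
test_cases = [(2, True), (3, False), (0, True), (-4, True)]

# The "automation tool" here is just a loop that executes every
# case and reports pass/fail -- no manual checking needed.
for value, expected in test_cases:
    actual = is_even(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"is_even({value}) == {expected}: {status}")
```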
PRINCIPLES OF SOFTWARE TESTING

● Testing shows the presence of defects
● Exhaustive testing is impossible
● Early testing
● Defect clustering
● Pesticide paradox
● Testing is context dependent
● Absence-of-errors fallacy
TESTING SHOWS PRESENCE OF DEFECTS

● Testing can show that defects are present, but cannot prove that there are no defects. Even after testing the application or product thoroughly, we cannot say that the product is 100% defect free.
EXHAUSTIVE TESTING IS
IMPOSSIBLE
● Testing everything, including all combinations of inputs and preconditions, is not possible. So, instead of exhaustive testing, we use risks and priorities to focus testing efforts. For example: if one screen of an application has 15 input fields, each with 5 possible values, then testing all the valid combinations would require 30,517,578,125 (5^15) tests.
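The arithmetic behind the example can be checked directly (the 1,000-tests-per-second rate is an assumed figure for illustration only):

```python
# 15 input fields, 5 possible values each: every extra field
# multiplies the total number of input combinations by 5.
fields = 15
values_per_field = 5
combinations = values_per_field ** fields
print(f"{combinations:,}")  # 30,517,578,125

# Even at an assumed rate of 1,000 automated tests per second,
# exhaustively testing this one screen would take nearly a year:
seconds = combinations / 1_000
days = seconds / (60 * 60 * 24)
print(f"about {days:.0f} days")  # about 353 days
```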
DEFECT CLUSTERING
● A small number of modules usually contain most of the defects discovered during pre-release testing, or show the most operational failures.
PESTICIDE PARADOX
● If the same kinds of tests are repeated again and again, eventually the same set of test cases will no longer find any new bugs. To overcome this "pesticide paradox", it is important to review the test cases regularly, and to write new and different tests that exercise different parts of the software or system to potentially find more defects.
TESTING IS CONTEXT DEPENDENT

● Testing is context dependent: different kinds of software are tested differently. For example, safety-critical software is tested differently from an e-commerce site.
ABSENCE – OF – ERRORS FALLACY

● A system that contains some defects can still be highly usable; conversely, finding and fixing defects does not help if the system built does not fulfill the users' needs.
● Ex. Windows and Linux/Unix
EARLY TESTING
● In the software development life cycle testing
activities should start as early as possible and
should be focused on defined objectives
COST OF FIXING DEFECTS
● The cost of fixing a defect rises steeply the later in the development life cycle it is found: a defect caught in the requirements phase is far cheaper to fix than one found after release.
RELATION BETWEEN TESTING AND
QUALITY
● In general: quality means how well things are working and whether they are as per the requirements.
● In terms of IT: deliverables work after installation, without errors.
● Testing improves quality.

QUALITY is directly proportional to Testing.
