
Software Testing


1. Introduction to Software Testing


Software:

Software is a set of instructions that performs some task. It is used in many real-world applications, for example: application software such as word processors; firmware in embedded systems; middleware, which controls and co-ordinates distributed systems; system software such as operating systems; video games; and websites. All of these applications need to run without error and provide a quality service to their users. For this reason, software has to be tested to confirm that it works accurately and correctly.

Software Testing:
Testing can be defined in simple words as performing verification and validation of a software product to confirm the correctness and accuracy of its working. Other definitions of software testing:
Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing also checks whether the software program/application/product:
meets the business and technical requirements that guided its design and development; works as expected; and can be implemented with the same characteristics. Testing is done manually or using automated tools, usually by a separate group of testers. Testing is carried out right from the beginning of the software development life cycle until the software is delivered to the customer.

Functional Vs non-functional testing:


Functional testing refers to tests that verify a specific action or function of the code. These are
usually found in the code requirements documentation, although some development
methodologies work from use cases or user stories. Functional tests tend to answer the
question of "can the user do this" or "does this particular feature work". Non-functional
testing refers to aspects of the software that may not be related to a specific function or user
action, such as scalability or security.
Non-functional testing tends to answer such questions as "how many people can log in at once", or "how easy is it to hack this software".


2. Error, Fault and Failure:


ERROR
Humans make errors in their thoughts, actions, and in the products that result from their actions. Errors occur in the process of writing a program. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure.
Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of such changes in environment include the software being run on a new hardware platform, alterations in source data, or interaction with different software. A single defect may result in a wide range of failure symptoms. Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g., unrecognized requirements that result in errors of omission by the program designer. A common source of requirement gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.
Examples of errors:
Incorrect usage of software by users.
Bad architecture and design by architects and designers.
Bad programming by developers.
Inadequate testing by testers.
A wrong build using incorrect configuration items by a build team member.

FAULT
A fault is the manifestation of one or more errors. Examples:
An incorrect statement.
A wrong data type.
A wrong mathematical formula in a design document.
Missing functionality in the system.

FAILURE
A failure occurs when a faulty piece of code is executed, leading to an incorrect state that propagates to the program's output. The following figure shows how an error made by a human can result in failure of the software.

Figure 1: Error, fault and failure.


Software Testing Objectives:


Testing is done to fulfil certain objectives:
1. To discuss the distinction between validation testing and defect testing.
2. To describe the principles of system and component testing.
3. To describe strategies for generating system test cases.
4. To understand the essential characteristics of tools used for test automation.
5. To find or prevent defects.
6. To determine that software products satisfy specified requirements.
7. To ensure that a system is ready for use.
8. To gain confidence that it works.
9. To provide information about the level of quality.
10. To determine user acceptability.
Software quality measures how well software is designed (quality of design) and how well the software conforms to that design (quality of conformance).


3. Software quality & Attributes:


1) Conformance to specification:
Quality is defined in terms of products and services whose measurable characteristics satisfy a fixed specification, that is, conformance to a specification defined in advance.

2) Meeting customer needs:
Quality is identified independently of any measurable characteristics; that is, quality is defined as the product's or service's capability to meet customer expectations, explicit or not.
Software quality is a multidimensional quantity and is measurable. To do this, we need to divide and measure software quality in terms of quality attributes:

Static Quality Attributes


Dynamic Quality Attributes
The following figure shows the different quality attributes:

Software Quality Attributes:



- Static Attributes:
1. Maintainability:
In software engineering, maintainability is the ease with which a software product can be modified in order to correct defects, meet new requirements, make future maintenance easier, or cope with a changed environment. These activities are known as software maintenance. The following sub-attributes bear on the effort needed to make specified modifications:

1.1 Analyzability:
Attributes of software that bear on the effort needed for diagnosis of deficiencies or causes of failures, or for identification of parts to be modified.

1.2 Changeability:
Attributes of software that bear on the effort needed for modification, fault removal or for
environmental change.

1.3 Stability:
Attributes of software that bear on the risk of unexpected effects of modifications.

1.4 Testability:
Attributes of software that bear on the effort needed for validating the modified software; the degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria are met.
Static testability example: software complexity.
Dynamic testability example: test coverage criteria.

- Dynamic Attributes:
1. Completeness:
The availability of all the features listed in the requirements or in the user manual.

2. Consistency:
Adherence to a common set of conventions and assumptions.

2.1 Compliance:
Attributes of software that make the software adhere to application related standards or
conventions or regulations in laws and similar prescriptions.

2.2 Conformance:
Attributes of software that make the software adhere to standards or conventions relating to
portability.

3. Usability:
The ease with which an application can be used. Usability testing also refers to testing
of a product by its potential users.

3.1 Understandability:
Attributes of software that bear on the users' effort for recognizing the logical concept and its
applicability.

3.2 Learnability:
Attributes of software that bear on the users' effort for learning its application.

3.3 Operability:
Attributes of software that bear on the users' effort for operation and operation control.

4. Performance:
The time the application takes to perform a requested task. Performance is considered as a
non-functional requirement.

4.1 Time behavior:


Attributes of software that bear on response and processing times and on throughput rates in performing its function.

4.2 Resource behavior:


Attributes of software that bear on the amount of resources used and the duration of such use in performing its function.

5. Reliability:
Software reliability is the probability of failure-free operation of software over a given time interval and under given conditions.

5.1 Maturity:
Attributes of software that bear on the frequency of failure by faults in the software.

5.2 Fault tolerance:


Attributes of software that bear on its ability to maintain a specified level of performance in
case of software faults or of infringement of its specified interface.

5.3 Recoverability:
Attributes of software that bear on the capability to re-establish its level of performance and
recover the data directly affected in case of a failure and on the time and effort needed for it.

6. Correctness:
The correct operation of an application.

6.1 Accurateness:
Attributes of software that bear on the provision of right or agreed results or effects.

6.2 Suitability:
Attributes of software that bear on the presence and appropriateness of a set of functions for specified tasks.

Note that while correctness attempts to establish that the program is error-free, testing attempts to find whether any errors are present in it.


4.Testing methods:
Static vs. dynamic testing :
There are many approaches to software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing is often implicit, as in proofreading, or when programming tools/text editors check source code structure or compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code that are applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment. Static testing involves verification, whereas dynamic testing involves validation. Together they help improve software quality. Mutation testing can be used to check that the test cases will detect errors which are introduced by mutating the source code.
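To make the mutation idea concrete, the sketch below (Python, with made-up function names) seeds a deliberate change into a copy of a small function and checks that the existing test cases detect it; it is an illustration of the principle only, not a real mutation tool.

# Illustration of the mutation-testing idea: a good test suite should fail
# (i.e. "kill" the mutant) when a deliberate change is seeded into the code.

def max_of_two(a, b):            # original implementation (hypothetical)
    return a if a >= b else b

def max_of_two_mutant(a, b):     # hand-made mutant: comparison flipped
    return a if a < b else b

def passes_all_cases(impl):
    """Run the same test cases against a given implementation."""
    cases = [((3, 5), 5), ((5, 3), 5), ((4, 4), 4)]
    return all(impl(a, b) == expected for (a, b), expected in cases)

if __name__ == "__main__":
    print("original passes:", passes_all_cases(max_of_two))           # expected: True
    print("mutant killed:", not passes_all_cases(max_of_two_mutant))  # expected: True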

The box approach:


Software testing methods are traditionally divided into white- and black-box testing. These
two approaches are used to describe the point of view that a test engineer takes when
designing test cases.

White-Box testing:
White-box testing (also known as clear box testing, glass box testing, transparent box testing
and structural testing) tests internal structures or workings of a program, as opposed to the
functionality exposed to the end-user. In white-box testing an internal perspective of the
system, as well as programming skills, are used to design test cases. The tester chooses inputs
to exercise paths through the code and determine the appropriate outputs. This is analogous to
testing nodes in a circuit, e.g. in-circuit testing (ICT). While white-box testing can be applied
at the unit, integration and system levels of the software testing process, it is usually done at
the unit level. It can test paths within a unit, paths between units during integration, and
between subsystems during a system-level test. Though this method of test design can
uncover many errors or problems, it might not detect unimplemented parts of the
specification or missing requirements.

Techniques used in white-box testing include:

API testing: testing of the application using its public and private APIs (application programming interfaces).

Code coverage: creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once).

Fault injection methods: intentionally introducing faults to gauge the efficacy of testing strategies.

Mutation testing methods.

Static testing methods.

Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Code coverage as a software metric can be reported as a percentage for:

Function coverage, which reports on functions executed.

Statement coverage, which reports on the number of lines executed to complete the test.

100% statement coverage ensures that all code paths or branches (in terms of control flow) are executed at least once. This is helpful in ensuring correct functionality, but not sufficient since the same code may process different inputs correctly or incorrectly.
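As a small illustration of statement coverage, the sketch below (Python; the classify function is made up) needs two test cases before every statement executes at least once. With a coverage tool such as coverage.py, commands along the lines of "coverage run -m pytest" followed by "coverage report" would show the resulting percentage.

# Two tests are needed for 100% statement coverage of classify():
# a single input would leave one of the two return statements unexecuted.

def classify(age):                    # hypothetical function under test
    if age >= 18:
        return "adult"
    return "minor"

def test_classify_adult():
    assert classify(30) == "adult"    # exercises the "if" branch

def test_classify_minor():
    assert classify(10) == "minor"    # exercises the fall-through statement

if __name__ == "__main__":
    test_classify_adult()
    test_classify_minor()
    print("every statement of classify() executed at least once")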

Black-box testing:

Black box diagram


Black-box testing treats the software as a "black box", examining functionality without any knowledge of internal implementation. The testers are only aware of what the software is
supposed to do, not how it does it. Black-box testing methods include: equivalence
partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table
testing, fuzz testing, model-based testing, use case testing, exploratory testing and
specification-based testing.
Specification-based testing aims to test the functionality of software according to the
applicable requirements. This level of testing usually requires thorough test cases to be
provided to the tester, who then can simply verify that for a given input, the output value (or
behavior), either "is" or "is not" the same as the expected value specified in the test case. Test
cases are built around specifications and requirements, i.e., what the application is supposed
to do. It uses external descriptions of the software, including specifications, requirements, and
designs to derive test cases. These tests can be functional or non-functional, though usually
functional. Specification-based testing may be necessary to assure correct functionality, but it
is insufficient to guard against complex or high-risk situations. One advantage of the black
box technique is that no programming knowledge is required. Whatever biases the
programmers may have had, the tester likely has a different set and may emphasize different
areas of functionality.
On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight". Because testers do not examine the source code, there are situations when
a tester writes many test cases to check something that could have been tested by only one
test case, or leaves some parts of the program untested. This method of test can be applied to
all levels of software testing: unit, integration, system and acceptance. It typically comprises
most if not all testing at higher levels, but can also dominate unit testing as well.
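As a brief illustration of two of the black-box techniques listed above, the sketch below (Python; the specification and function are made up) derives test values from a stated requirement, one representative per equivalence partition plus values around each boundary, without looking at how the function is implemented.

# Black-box test design for a hypothetical specification:
# "accept a percentage score between 0 and 100 inclusive; reject anything else."
# The test values are derived purely from the specification, not from the code.

def is_valid_score(score):            # implementation is irrelevant to the test design
    return 0 <= score <= 100

# Equivalence partitioning: one representative value per partition.
partition_cases = [(-5, False), (50, True), (130, False)]

# Boundary value analysis: values on and immediately around each boundary.
boundary_cases = [(-1, False), (0, True), (1, True), (99, True), (100, True), (101, False)]

def run_cases(cases):
    for value, expected in cases:
        assert is_valid_score(value) == expected, f"failed for input {value}"

if __name__ == "__main__":
    run_cases(partition_cases)
    run_cases(boundary_cases)
    print("all black-box cases passed")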

Visual testing:
The aim of visual testing is to provide developers with the ability to examine what was
happening at the point of software failure by presenting the data in such a way that the
developer can easily find the information he or she requires, and the information is expressed
clearly.
At the core of visual testing is the idea that showing someone a problem (or a test failure),
rather than just describing it, greatly increases clarity and understanding. Visual testing therefore requires the recording of the entire test process, capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones. Visual testing provides a number of advantages. The quality of communication is increased dramatically
because testers can show the problem (and the events leading up to it) to the developer as
opposed to just describing it and the need to replicate test failures will cease to exist in many
cases. The developer will have all the evidence he or she requires of a test failure and can
instead focus on the cause of the fault and how it should be fixed.
Visual testing is particularly well-suited for environments that deploy agile methods in their
development of software, since agile methods require greater communication between testers
and developers and collaboration within small teams. Ad hoc testing and exploratory testing
are important methodologies for checking software integrity, because they require less
preparation time to implement, while the important bugs can be found quickly. In ad hoc
testing, where testing takes place in an improvised, impromptu way, the ability of a test tool
to visually record everything that occurs on a system becomes very important.
Visual testing is gathering recognition in customer acceptance and usability testing, because
the test can be used by many individuals involved in the development process. For the
customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture at the time of software failure for the developer.

Grey-box testing:
Grey-box testing (American spelling: gray-box testing) involves having knowledge of
internal data structures and algorithms for purposes of designing tests, while executing those
tests at the user, or black-box level.
The tester is not required to have full access to the software's source code. Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test.

However, tests that require modifying a back-end data repository such as a database or a log file do qualify as grey-box, as the user would not normally be able to change the data repository in normal production operations. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.
By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions, such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This particularly applies to data type handling, exception handling, and so on.
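A minimal grey-box sketch, assuming a SQLite-backed feature with a made-up schema and function: the tester seeds the database, exercises the behaviour through its entry point, then queries the table to confirm that the expected change was reflected.

import sqlite3

# Grey-box sketch: the tester knows the underlying schema, seeds the database,
# exercises the feature, then queries the database to verify the state change.

def deactivate_user(conn, user_id):            # code under test (hypothetical)
    conn.execute("UPDATE users SET active = 0 WHERE id = ?", (user_id,))
    conn.commit()

def test_deactivate_user():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 1)")   # seed the test data
    conn.commit()

    deactivate_user(conn, 1)                   # exercise the behaviour under test

    (active,) = conn.execute("SELECT active FROM users WHERE id = 1").fetchone()
    assert active == 0                         # expected change has been reflected
    conn.close()

if __name__ == "__main__":
    test_deactivate_user()
    print("grey-box check passed")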


5. Testing levels:
There are generally four recognized levels of tests: unit testing, integration testing, system
testing, and acceptance testing. Tests are frequently grouped by where they are added in the
software development process, or by the level of specificity of the test.
The main levels during the development process as defined by the SWEBOK guide are unit-,
integration-, and system testing that are distinguished by the test target without implying a
specific process model. Other test levels are classified by the testing objective.

Unit testing:
Unit testing, also known as component testing, refers to tests that verify the functionality of a
specific section of code, usually at the function level.
In an object-oriented environment, this is usually at the class level, and the minimal unit tests
include the constructors and destructors. These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software; rather, it is used to ensure that the building blocks of the software work independently from each other.
Unit testing is a software development process that involves synchronized application of a
broad spectrum of defect prevention and detection strategies in order to reduce software
development risks, time, and costs. It is performed by the software developer or engineer
during the construction phase of the software development lifecycle. Rather than replacing traditional QA focuses, it augments them. Unit testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process.
Depending on the organization's expectations for software development, unit testing might
include static code analysis, data flow analysis, metrics analysis, peer code reviews, code
coverage analysis and other software verification practices.
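A small unit-test sketch using Python's unittest module is shown below; the Stack class is a made-up unit under test, with one normal-path test and one corner case, written in the developer-side, white-box style described above.

import unittest

# Unit tests for a small hypothetical class: normal path plus a corner case.

class Stack:                                   # unit under test (hypothetical)
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):  # corner case
        with self.assertRaises(IndexError):
            Stack().pop()

if __name__ == "__main__":
    unittest.main()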

Integration testing:
Integration testing is any type of software testing that seeks to verify the interfaces between
components against a software design. Software components may be integrated in an iterative
way or all together ("big bang"). Normally the former is considered a better practice since it
allows interface issues to be located more quickly and fixed. Integration testing works to
expose defects in the interfaces and interaction between integrated components (modules).
Progressively larger groups of tested software components corresponding to elements of the
architectural design are integrated and tested until the software works as a system.

Component interface testing:


The practice of component interface testing can be used to check the handling of data passed
between various units, or subsystem components, beyond full integration testing between
those units. The data being passed can be considered as "message packets" and the range or
data types can be checked, for data generated from one unit, and tested for validity before
being passed into another unit.
One option for interface testing is to keep a separate log file of data items being passed, often
with a timestamp logged to allow analysis of thousands of cases of data passed between units
for days or weeks.
Tests can include checking the handling of some extreme data values while other interface
variables are passed as normal values. Unusual data values in an interface can help explain
unexpected performance in the next unit.
Component interface testing is a variation of black-box testing, with the focus on the data
values beyond just the related actions of a subsystem component.
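The sketch below illustrates the idea with made-up packet fields: data handed from one unit to the next is treated as a message packet, type- and range-checked, and written to a separate, timestamped interface log for later analysis.

import time

# Component-interface sketch: validate and log the data passed between units.

def validate_packet(packet):
    """Check types and ranges of a packet produced by the upstream unit."""
    assert isinstance(packet.get("sensor_id"), int), "sensor_id must be an int"
    assert isinstance(packet.get("reading"), float), "reading must be a float"
    assert -50.0 <= packet["reading"] <= 150.0, "reading out of expected range"

def log_packet(packet, logfile="interface_log.txt"):
    with open(logfile, "a") as fh:                  # separate interface log file
        fh.write(f"{time.time():.3f} {packet}\n")   # timestamped entry

if __name__ == "__main__":
    packet = {"sensor_id": 7, "reading": 21.5}      # produced by unit A
    validate_packet(packet)                         # checked at the interface
    log_packet(packet)                              # logged for later analysis
    # ...the packet would then be passed into unit B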

System testing:
System testing, or end-to-end testing, tests a completely integrated system to verify that it meets its requirements. For example, a system test might involve testing a logon interface, then creating and editing an entry, plus sending or printing results, followed by summary processing or deletion (or archiving) of entries, then logoff.
In addition, the software testing should ensure that the program, as well as
working as expected, does not also destroy or partially corrupt its operating
environment or cause other processes within that environment to become
inoperative (this includes not corrupting shared memory, not consuming or
locking up excessive resources and leaving any parallel processes unharmed by
its presence).
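To make the end-to-end scenario above concrete, the sketch below uses a made-up, in-memory DemoApp as a stand-in for the real system, so the logon, create, edit, delete and logoff flow can be exercised as one script.

# End-to-end sketch: log on, create and edit an entry, delete it, log off.
# DemoApp is a hypothetical stand-in for the real system's interface.

class DemoApp:
    def __init__(self):
        self.entries = {}
        self.next_id = 1
        self.logged_in = False

    def login(self, user, password):
        self.logged_in = True              # a real system would check credentials
        return self

    def create_entry(self, text):
        entry_id = self.next_id
        self.entries[entry_id] = text
        self.next_id += 1
        return entry_id

    def edit_entry(self, entry_id, text):
        self.entries[entry_id] = text

    def read_entry(self, entry_id):
        return self.entries.get(entry_id)

    def delete_entry(self, entry_id):
        self.entries.pop(entry_id, None)

    def logout(self):
        self.logged_in = False

if __name__ == "__main__":
    app = DemoApp()
    session = app.login("tester", "secret")         # logon
    entry_id = session.create_entry("first draft")  # create an entry
    session.edit_entry(entry_id, "final text")      # edit it
    assert session.read_entry(entry_id) == "final text"
    session.delete_entry(entry_id)                  # deletion of the entry
    assert session.read_entry(entry_id) is None
    session.logout()                                # logoff
    print("end-to-end flow completed")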

Acceptance testing:
Finally, the system is delivered to the user for acceptance testing.


6. Testing Types:
Installation testing:
An installation test assures that the system is installed correctly and is working on the customer's actual hardware.

Compatibility testing:
A common cause of software failure (real or perceived) is a lack of its
compatibility with other application software, operating systems (or operating
system versions, old or new), or target environments that differ greatly from the
original (such as a terminal or GUI application intended to be run on the desktop
now being required to become a web application, which must render in a web
browser). For example, in the case of a lack of backward compatibility, this can
occur because the programmers develop and test software only on the latest
version of the target environment, which not all users may be running. This
results in the unintended consequence that the latest work may not function on
earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues
can be fixed by proactively abstracting operating system functionality into a
separate program module or library.

Regression testing:
Regression testing focuses on finding defects after a major code change has occurred.
Specifically, it seeks to uncover software regressions, such as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended
consequence of program changes, when the newly developed part of the software collides
with the previously existing code. Common methods of regression testing include re-running
previous sets of test-cases and checking whether previously fixed faults have re-emerged. The
depth of testing depends on the phase in the release process and the risk of the added features.
They can either be complete, for changes added late in the release or deemed to be risky, or
be very shallow, consisting of positive tests on each feature, if the changes are early in the
release or deemed to be of low risk. Regression testing is typically the largest test effort in
commercial software development, due to checking numerous details in prior software
features, and even new software can be developed while using some old test-cases to test
parts of the new design to ensure prior functionality is still supported.
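One common regression practice is to keep a test that pins the behaviour fixed for an earlier bug and re-run it on every change; the sketch below (Python, with a made-up discount function and bug history) illustrates the idea.

# Regression sketch: a test written when a bug was originally fixed stays in the
# suite and is re-run on every change, so the old defect cannot silently return.

def apply_discount(price, percent):
    # An earlier (hypothetical) version lost cents on small prices; fixed here.
    return round(price * (1 - percent / 100.0), 2)

def test_regression_small_price_keeps_cents():
    """Pins the behaviour that the previously fixed fault had broken."""
    assert apply_discount(1.99, 10) == 1.79

def test_regression_zero_discount_is_identity():
    assert apply_discount(50.00, 0) == 50.00

if __name__ == "__main__":
    test_regression_small_price_keeps_cents()
    test_regression_zero_discount_is_identity()
    print("regression suite passed")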

Acceptance testing:
Acceptance testing can mean one of two things:
1. A smoke test is used as an acceptance test prior to introducing a new build to the main
testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, often in their lab environment on their own
hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed
as part of the hand-off process between any two phases of development.

Alpha testing:
Alpha testing is simulated or actual operational testing by potential users/customers or an
independent test team at the developers' site. Alpha testing is often employed for off-the-shelf
software as a form of internal acceptance testing, before the software goes to beta testing.

Beta testing:
Beta testing comes after alpha testing and can be considered a form of external
user acceptance testing. Versions of the software, known as beta versions, are
released to a limited audience outside of the programming team. The software is
released to groups of people so that further testing can ensure the product has
few faults or bugs. Sometimes, beta versions are made available to the open
public to increase the feedback field to a maximal number of future users.

Functional vs non-functional testing:


Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work". Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution.
Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
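As a rough illustration of a non-functional question such as "how many people can log in at once", the sketch below simulates concurrent logins with a thread pool and measures elapsed time; the login function is a made-up stand-in, not a real authentication call.

import time
from concurrent.futures import ThreadPoolExecutor

# Non-functional sketch: measure how a (simulated) login holds up when many
# users arrive at once, rather than whether a single login works at all.

def login(user_id):
    time.sleep(0.01)                 # stand-in for real authentication work
    return True

def measure_concurrent_logins(n_users):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(login, range(n_users)))
    elapsed = time.perf_counter() - start
    return all(results), elapsed

if __name__ == "__main__":
    ok, elapsed = measure_concurrent_logins(500)
    print(f"500 simulated logins succeeded={ok} in {elapsed:.2f}s")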

Security testing:
Security testing is essential for software that processes confidential data to prevent system
intrusion by hackers.


Testing process:
Top-down and bottom-up:
Bottom-up testing is an approach to integration testing where the lowest-level components (modules, procedures, and functions) are tested first, then integrated and used to facilitate the testing of higher-level components. After the integration testing of lower-level integrated modules, the next level of modules is formed and can be used for integration testing. The process is repeated until the components at the top of the hierarchy are tested. This approach is helpful only when all or most of the modules of the same development level are ready.
This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage. Top-down testing is an approach to integration testing where the top-level integrated modules are tested first and each branch of the module is then tested step by step until the end of the related module. In both approaches, method stubs and drivers are used to stand in for missing components and are replaced as the levels are completed.
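The sketch below shows the stub-and-driver idea for a top-down integration step: a made-up higher-level checkout_total module is exercised by a driver while a stub stands in for the not-yet-integrated tax component.

# Top-down integration sketch: the high-level module is tested first, with a
# stub replacing a lower-level component that is not yet integrated.

def tax_service_stub(amount):
    """Stub replacing the real tax component; returns a fixed, known rate."""
    return 0.10 * amount

def checkout_total(items, tax_fn):
    """Higher-level module under test; the tax component is injected."""
    subtotal = sum(items)
    return subtotal + tax_fn(subtotal)

def driver():
    """Driver that exercises the higher-level module with the stub."""
    total = checkout_total([10.0, 20.0], tax_fn=tax_service_stub)
    assert abs(total - 33.0) < 1e-9
    print("top-down integration step passed with stub")

if __name__ == "__main__":
    driver()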

A sample testing cycle:


Although variations exist between organizations, there is a typical cycle for testing. The
sample below is common among organizations employing the Waterfall development model.
The same practices are commonly found in other development models, but might not be as
clear or explicit.
Requirements analysis: Testing should begin in the requirements phase of the software
development life cycle. During the design phase, testers work to determine what aspects of a
design are testable and with what parameters those tests work.
Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried
out during testing, a plan is needed.
Test development: Test procedures, test scenarios, test cases, test datasets, and test scripts to use in testing the software.
Test execution: Testers execute the software based on the plans and test documents then
report any errors found to the development team.
Test reporting: Once testing is completed, testers generate metrics and make final reports on
their test effort and whether or not the software tested is ready for release.
Test result analysis: or defect analysis, is done by the development team, usually together with the client, in order to decide which defects should be assigned, fixed, rejected (i.e. the software is found to be working properly) or deferred to be dealt with later.

Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team. This is also known as resolution testing.
Regression testing: It is common to have a small test program built from a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not broken anything and that the software product as a whole is still working correctly.
Test closure: Once the testing meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.

Automated testing:
Many programming groups are relying more and more on automated testing, especially
groups that use test-driven development. There are many frameworks to write tests in, and
continuous integration software will run tests automatically every time code is checked into a
version control system.
While automation cannot reproduce everything that a human can do (and all the
ways they think of doing it), it can be very useful for regression testing. However,
it does require a well-developed test suite of testing scripts in order to be truly
useful.

Testing Artifacts:
The software testing process can produce several artifacts.

Test plan
A test specification is called a test plan. The developers are well aware of what test plans will be executed, and this information is made available to management and the developers. The idea
is to make them more cautious when developing their code or making additional changes.
Some companies have a higher-level document called a test strategy.

Traceability matrix
A traceability matrix is a table that correlates requirements or design documents to test
documents. It is used to change tests when related source documents are changed, and to select test cases for execution when planning regression tests by considering requirement coverage.
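An illustrative fragment of such a matrix is sketched below; the requirement and test-case identifiers are made up.

REQ-01  User can log in with valid credentials      ->  TC-03, TC-04
REQ-02  Account locks after three failed attempts   ->  TC-05
REQ-03  Report can be exported as PDF               ->  TC-11, TC-12

If REQ-02 changes, the matrix immediately shows that TC-05 must be reviewed, and a regression run covering that requirement can be selected from the right-hand column.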

Test case
A test case normally consists of a unique identifier, requirement references from a design
specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result.
This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe the input scenario and the expected results in more detail. It can
occasionally be a series of steps (but often steps are contained in a separate test procedure
that can be exercised against multiple test cases, as a matter of economy) but with one
expected result or expected outcome. The optional fields are a test case ID, test step, or order
of execution number, related requirement(s), depth, test category, author, and check boxes for
whether the test is automatable and has been automated. Larger test cases may also contain
prerequisite states or steps, and descriptions.
A test case should also contain a place for the actual result. These steps can be stored in a
word processor document, spreadsheet, database, or other common repository. In a database
system, you may also be able to see past test results, who generated the results, and what
system configuration was used to generate those results. These past results would usually be
stored in a separate table.
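For illustration, a minimal test case record might look like the sketch below; the identifiers, requirement and application behaviour are made up.

Test case ID: TC-05
Related requirement: REQ-02 (account lockout)
Precondition: user 'demo' exists with a known password
Steps: 1) enter a wrong password three times; 2) attempt a fourth login with the correct password
Input: user 'demo', deliberately wrong password
Expected result: the account is locked and a lockout message is displayed
Actual result: (recorded at execution time)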

Correctness Vs Reliability:
Correctness is established via the requirements specification and the program text, to show that the software behaves as expected. Reliability is the probability of successful execution of a program on elements selected randomly from its input domain. Though correctness of a program is desirable, it is almost never the objective of testing. To establish correctness via testing would imply testing a program on all elements in the input domain, which in most cases encountered in practice is impossible to accomplish. Thus correctness is established via mathematical proofs of programs. While correctness attempts to establish that the program is error-free, testing attempts to find if there are any errors in it. Thus completeness of testing does not necessarily demonstrate that a program is error-free.

Testing Vs Debugging:
Testing is the process of determining if a program has any errors. When testing reveals an
error, the process used to determine the cause of this error and to remove it, is known as
debugging.
Testing catches and reports bugs.
Testing reduces the probability of undiscovered bugs remaining in the software.
Testing is not a proof of correctness.
Testing can be planned, with allocation of effort, schedule and resources, and with criteria on when to stop testing.
Testing starts with known conditions such as what to test, test inputs and expected outputs, and uses test procedures.

Testing shows that bugs are present in a program, but cannot prove that there are no bugs.
There is no need to know the design to carry out testing.
Good testing is done by an outsider, that is, by someone other than the team that develops the code.
Test automation can be used to store and execute test cases.
Debugging is the process of analyzing the causes behind bugs and removing them.
Debugging starts with a list of reported bugs with unknown initial conditions.
It is not possible to plan and estimate the schedule and effort for debugging.
Debugging is a problem-solving activity involving deduction.

Detailed design knowledge is of great help in good debugging.

Debugging is done by the development team and hence is an insider's work.
Automation of debugging is not in place.

The figure shows a test/debug cycle. Example steps from such a cycle include:
Execute the program on an empty input sequence.
Test the program for robustness against erroneous inputs, such as 'R' typed in as the request character.
Record all failures of the test program in a suitable file using the company's failure report form.


7. Software Testing Life Cycle:


The software testing life cycle identifies what test activities to carry out and when (what is the best time) to accomplish those test activities. Even though testing differs between organizations, there is a common testing life cycle.
The software testing life cycle consists of six (generic) phases:

Test Planning,
Test Analysis,
Test Design,
Construction and Verification,
Testing Cycles,
Final Testing and Implementation, and
Post Implementation.
Software testing has its own life cycle that intersects with every stage of the SDLC.
The basic requirement in the software testing life cycle is to control and deal with software testing: manual, automated, and performance.

Test Planning
This is the phase where the project manager has to decide what needs to be tested and whether the appropriate budget is available. Naturally, proper planning at this stage greatly reduces the risk of low-quality software. This planning is an ongoing process with no end point. Activities at this stage include preparation of a high-level test plan (according to the IEEE test plan template, the Software Test Plan (STP) is designed to prescribe the scope, approach, resources, and schedule of all testing activities; the plan must identify the items to be tested, the features to be tested, the types of testing to be performed, the personnel responsible for testing, the resources and schedule required to complete testing, and the risks associated with the plan). Almost all of the activities done during this stage are included in the software test plan and revolve around a test plan.

Test Analysis
Once the test plan is made and decided upon, the next step is to delve a little deeper into the project and decide what types of testing should be carried out at the different stages of the SDLC, whether we need or plan to automate, and if so, when the appropriate time to automate is, and what type of specific documentation is needed for testing.
Proper and regular meetings should be held between the testing team, project managers, development team and business analysts to check the progress of things. This gives a fair idea of the movement of the project, ensures the completeness of the test plan created in the planning phase, and further helps in refining the testing strategy created earlier. In this stage we start creating test case formats and the test cases themselves. We need to develop a functional validation matrix based on the business requirements to ensure that all system requirements are covered by one or more test cases, identify which test cases to automate, and begin review of documentation, i.e. functional design, business requirements, product specifications, product externals, etc. We also have to define areas for stress and performance testing.

Test Design
The test plans and cases developed in the analysis phase are revised. The functional validation matrix is also revised and finalized. In this stage, risk assessment criteria are developed. If automation is planned, the test cases to automate are selected and scripting of those cases begins. Test data is prepared. Standards for unit testing and pass/fail criteria are defined here. The schedule for testing is revised (if necessary) and finalized, and the test environment is prepared.
Construction and verification
In this phase we have to complete all the test plans and test cases, and complete the scripting of the automated test cases; the stress and performance testing plans also need to be completed. We have to support the development team in their unit testing phase, and bug reporting is done as and when bugs are found. Integration tests are performed and errors (if any) are reported.

Testing Cycles
In this phase we have to complete testing cycles until the test cases are executed without errors or a predefined condition is reached. The cycle is: run test cases --> report bugs --> revise test cases (if needed) --> add new test cases (if needed) --> fix bugs --> retest (test cycle 2, test cycle 3, and so on).
Final Testing and Implementation
In this phase we have to execute the remaining stress and performance test cases, complete and update the testing documentation, and provide and complete the different matrices for testing. Acceptance, load and recovery testing will also be conducted, and the application needs to be verified under production conditions.
Post Implementation
In this phase, the testing process is evaluated and the lessons learnt from it are documented. A line of attack to prevent similar problems in future projects is identified, and plans to improve the processes are created. The recording of new errors and enhancements is an ongoing process. Cleaning up of the test environment is done and test machines are restored to baselines in this stage.
Example of a test plan
A test cycle is often guided by a test plan; a sketch of such a plan is given below.
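The outline below is an illustrative sketch only, loosely following the test-plan contents listed earlier (scope, approach, items and features to be tested, types of testing, personnel, resources, schedule and risks); the project name and all details are made up.

Test plan: Online Library System, release 1.2 (hypothetical)
Scope and approach: functional and regression testing of search, borrowing and returns; manual testing plus automated API tests.
Items to be tested: web front end, borrowing service, notification emails.
Features not to be tested: third-party payment gateway (covered by the vendor).
Types of testing: unit, integration, system, regression, performance (search under load).
Personnel: two testers for functional testing, one engineer for automation.
Resources and schedule: staging environment with seeded test data; two-week test cycle before each release.
Risks: test data may not reflect production volumes; limited time for recovery testing.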


CONCLUSION
Software testing can be defined in simple words as performing verification and validation of the software product to confirm the correctness and accuracy of its working.
Software quality is a multidimensional quantity and is measurable.
To do this, we need to divide and measure software quality in terms of quality attributes:

Static Quality Attributes.


Dynamic Quality Attributes.
Testing methods:
Static vs. dynamic testing.
The box approach (white-box, black-box, grey-box and visual testing).
Testing levels:
Unit testing, integration testing, component interface testing, system testing, and acceptance testing.

Testing types:
Installation testing, compatibility testing, regression testing, alpha testing, beta testing, and security testing.

Software testing life cycle:


Test Planning,
Test Analysis,
Test Design,
Construction and Verification,
Testing Cycles,
Final Testing and Implementation, and
Post Implementation.

Thus, we have studied the aspects of software testing in detail, as described in this report.


