Software Testing
Introduction
Software testing is a critical element of software quality assurance and represents the ultimate review of the correctness of the product. A quality product enhances customer confidence in using it and thereby improves the economics of the business. In other words, a good quality product means zero defects, which is the outcome of a better quality testing process. Software development broadly falls into two categories:
1. Product development
2. Project development
Product development is done assuming a wide range of customers and their needs. This type of development involves customers from all domains, and requirements are collected from many different environments.
Project development is done by focusing on a particular customer's needs, gathering data from that customer's environment, and bringing out a valid set of information that serves as a foundation for the development process.
Testing is a necessary stage in the software life cycle: it gives the programmer and user some sense of correctness, though never a "proof of correctness." With effective testing techniques, software is more easily debugged, less likely to "break," more "correct," and, in summary, better.
Most development processes in the IT industry seem to follow tight schedules. Often, these schedules adversely affect testing, and the testing process ends up being treated as an afterthought. As a result, defects accumulate in the application and are overlooked in order to meet deadlines, and the developers convince themselves that the overlooked errors can be rectified in subsequent releases.
The definition of testing is often misunderstood. People frequently work from an incorrect definition of the word testing, and this is a primary cause of poor program testing.
Testing a product means adding value to it by raising its quality or reliability, and raising the reliability of the product means finding and removing errors. Hence one should not test a product to show that it works; rather, one should start with the assumption that the program contains errors and then test the program to find as many of those errors as possible.
Definitions of Testing:
“Testing is the process of executing a program with the intent of finding errors.”
Or
“Testing is the process of evaluating a system by manual or automated means and verifying that it satisfies specified requirements.”
Or
"... the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results..."
stories appeared in newspapers and on TV news. These problems were later traced to software that had not been tested under all conditions.
Calling any and all software problems bugs may sound simple enough, but doing so doesn't really address the issue. To avoid running in circles with definitions, there needs to be a definitive description of what a bug is.
A software bug occurs when one or more of the following five rules is true:
1) The software doesn’t do something that the product specification says it
should do.
2) The software does something that the product specification says it shouldn’t
do.
3) The software does something that the product specification doesn’t mention.
4) The software doesn’t do something that the product specification doesn’t
mention but should.
5) The software is difficult to understand, hard to use, slow, or, in the software tester's eyes, will be viewed by the end user as just plain not right.
From the above examples you have seen how nasty bugs can be, you know what the definition of a bug is, and you can imagine how costly they can be. So what is the main goal of a tester?
As a software tester you shouldn't be content with just finding bugs; you should think about how to find them sooner in the development process, thus making them cheaper to fix.
“The goal of a software tester is to find bugs, and find them as early as possible.”
“The goal of a software tester is to find bugs, find them as early as possible, and make sure they get fixed.”
Principles of Testing
The main objective of testing is to find defects in requirements, design, documentation, and code as early as possible. The test process should be such that the software product delivered to the customer is defect-free. All tests should be traceable to customer requirements.
Test cases must be written for invalid and unexpected, as well as for valid and expected, input conditions. A necessary part of a test case is a definition of the expected output or result. A good test case is one that has a high probability of detecting an as-yet undiscovered error.
The probability of locating more errors in any one module is directly proportional to the number of errors already found in that module.
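As a minimal illustration of this principle, the following sketch (in Python, using pytest; the is_valid_age function is purely hypothetical) pairs valid, invalid, and boundary inputs with explicitly defined expected results:

import pytest

# Hypothetical function used only to illustrate the principle above.
def is_valid_age(age):
    """Accept whole-number ages in the inclusive range 1..120."""
    return isinstance(age, int) and 1 <= age <= 120

@pytest.mark.parametrize("age, expected", [
    (25, True),      # valid, expected input
    (1, True),       # lower boundary
    (120, True),     # upper boundary
    (0, False),      # invalid: just below the lower boundary
    (121, False),    # invalid: just above the upper boundary
    (-5, False),     # invalid, unexpected input
    ("30", False),   # unexpected input type
])
def test_is_valid_age(age, expected):
    # Every test case defines its expected output, as the principle requires.
    assert is_valid_age(age) == expected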
Let us compare the traditional software development life cycle (Fig A) with the presently and most commonly used life cycle (Fig B).
Fig A (traditional): Requirements → Design → Development → Testing → Implementation → Maintenance, with testing performed as a single phase after development.
Fig B (recommended): Requirements → Design → Development → Implementation → Maintenance, with testing carried out alongside every phase.
In Fig A above, the testing phase comes after development (coding) is complete and before the product is launched and goes into the maintenance phase. This model has some disadvantages: the cost of fixing errors is high because errors are not found until coding is completed, and if there is an error in the Requirements phase then all subsequent phases have to be changed, so the total cost becomes very high.
Fig B shows the recommended test process, which involves testing in every phase of the life cycle. During the Requirements phase, the emphasis is on validation, to determine that the defined requirements meet the needs of the organization. During the Design and Development phases, the emphasis is on verification, to ensure that the design and program accomplish the defined requirements. During the Test and Installation phases, the emphasis is on inspection, to determine that the implemented system meets the system specification. During the Maintenance phase, the system is re-tested to determine that the changes work and that the unchanged portions continue to work.
Requirements phase
The main objective of requirement analysis is to prepare a document that includes all the client requirements; the Software Requirement Specification (SRS) document is the primary output of this phase. Proper requirements and specifications are critical for a successful project, and removing errors in this phase costs much less than removing the same errors in the Design phase or later. The following verification activities should also be performed:
• Determine the verification approach.
• Determine the adequacy of the requirements.
• Generate functional test data.
• Determine the consistency of the design with the requirements.
Design phase
High-Level Design (HLD)
The high-level design plays a major role in the design phase. The entry criterion for this activity is the requirements document (SRS), and the exit criteria are the HLD document, project standards, the functional design documents, and the database design document.
Low-Level Design (LLD)
During the detailed design phase, the view of the application developed during high-level design is broken down into modules and programs. Logic design is done for every program and then documented as program specifications, and a unit test plan is created for every program.
The entry criterion for this activity is the HLD document, and the exit criteria are the program specifications and the unit test plan (LLD).
Development Phase
This is the phase where coding actually starts. After the preparation of the HLD and LLD, the developers know what their roles are and develop the project according to the specifications. This stage produces the source code, executables, and database, and its output is the subject of subsequent testing and validation.
The inputs for this phase are the physical database design document, project standards, program specifications, unit test plans, program skeletons, and utility tools. The outputs are test data, source code, executables, and code reviews.
Testing phase
This phase is intended to find defects that can be exposed only by testing the entire system, which can be done by static testing or dynamic testing. Static testing means testing the product without executing it; we do this by examining it and conducting reviews. Dynamic testing is what you would normally think of as testing: we test the executing parts of the project.
A series of different tests are done to verify that all system elements have been properly integrated and that the system performs all of its functions.
Note that system test planning can occur before coding is completed; indeed, it is often done in parallel with coding. The input for this phase is the requirements specification document, and the outputs are the system test plan and the test results.
Maintenance phase
This phase covers all modifications, whether to fix aspects that do not meet the customer requirements or to append something to the present system. All types of corrections to the project or product take place in this phase, and the cost of risk is very high here. This is the last phase of the software development life cycle. The input is the project to be corrected and the output is the modified version of the project.
Software Development Lifecycle Models
The process used to create a software product from its initial conception to its public
release is known as the software development lifecycle model.
There are many different methods that can be used for developing software, and no
model is necessarily the best for a particular project. There are four frequently used
models:
Big-Bang Model
The beauty of this model is that it's simple: there is little planning, scheduling, or formal development process, and all the effort is spent developing the software and writing the code. It's an ideal process if the product requirements aren't well understood and the final release date is flexible. It's also important to have flexible customers, because they won't know what they're getting until the very end.
Waterfall Model
A project using the waterfall model moves down a series of steps, starting from an initial idea and ending with a final product. At the end of each step, the project team holds a review to determine whether it is ready to move to the next step; if the project isn't ready to progress, it stays at that level until it is. Each phase requires well-defined information, utilizes well-defined processes, and results in well-defined outputs. Resources are required to complete the process in each phase, and each phase is accomplished through the application of explicit methods, tools, and techniques.
The waterfall model is also called the phased model because of the sequential move from one phase to another, the implication being that systems cascade from one level to the next in smooth progression. It has the following seven phases of development:
Requirement phase
Analysis phase
Design phase
Development phase
Testing phase
Implementation phase
Maintenance phase
Prototype model
The Prototyping model, also known as the Evolutionary model, came into the SDLC because of certain failures in first versions of application software. A failure in the first version of an application inevitably leads to the need to redo it. To avoid such failures, the concept of prototyping is used: instead of freezing the requirements before design and coding can begin, a prototype is built to understand the requirements. The prototype is built using the known requirements, and by viewing or using the prototype the user can actually feel how the system will work.
Prototyping Process
• The developer and end user work together to define the specifications of
the critical parts of the system.
• The developer constructs a working model of the system.
• The resulting prototype is a partial representation of the system.
• The prototype is demonstrated to the user.
• The user identifies problems and redefines the requirements.
• The designer uses the validated requirements as a basis for designing the actual or production software.
Spiral model
The traditional software process models don't deal with the risks that may be faced during
project development. One of the major causes of project failure in the past has been
negligence of project risks. Due to this, nobody was prepared when something unforeseen
happened. Barry Boehm recognized this and tried to incorporate the factor, project risk,
into a life cycle model. The result is the Spiral model, which was first presented in 1986.
The new model aims at incorporating the strengths and avoiding the different of the other
models by shifting the management emphasis to risk evaluation and resolution.
Each phase in the spiral model is split into four sectors of major activities.
Objective setting:
This activity involves specifying the project and process objectives in terms of their
functionality and performance.
Risk analysis:
It involves identifying and analyzing alternative solutions. It also involves identifying the
risks that may be faced during project development.
Engineering:
This activity involves developing and verifying the next level of the product.
Customer evaluation:
During this phase, the customer evaluates the product for any errors and modifications.
Verification & Validation
Verification and validation are often used interchangeably but have different definitions, and these differences are important to software testing. Verification asks whether the product is being built correctly (does the work conform to its specification?), while validation asks whether the right product is being built (does it meet the customer's actual needs?).
Verification can be conducted through reviews. Quality reviews provide visibility into the development process throughout the software development life cycle and help teams determine whether to continue development activity at various checkpoints or milestones in the process. They are conducted to identify defects in a product early in the life cycle.
Types of Reviews
• In-process Reviews :-
They look at the product during a specific time period of life cycle, such
as during the design activity. They are usually limited to a segment of a
project, with the goal of identifying defects as work progresses, rather
than at the close of a phase or even later, when they are more costly to
correct.
Classes of Reviews
The most informal class of review is generally a one-to-one meeting between the author of a work product and a peer, initiated as a request for input regarding a particular artifact or problem. There is no agenda, and results are not formally reported.
Project Management
Project management is Organizing, Planning and Scheduling software projects. It is
concerned with activities involved in ensuring that software is delivered on schedule and
in accordance with the requirements of the organization developing and procuring the
software. Project management is needed because software development is always subject
to budget and schedule constraints that are set by the organization developing the
software.
• Project planning.
• Project scheduling.
• Iterative Code/Test/Release Phases
• Production Phase
• Post Mortem
Project planning
Project scheduling
This activity involves splitting the project into tasks and estimating the time and resources required to complete each task. Tasks are organized to run concurrently to make optimal use of the workforce, and task dependencies are minimized to avoid delays caused by one task waiting for another to complete. The project manager has to take into consideration various aspects such as scheduling and estimating manpower resources, so that the cost of developing a solution stays within limits, and also has to allow for contingency in planning.
After the planning and design phases, the client and the development team have to agree on the feature set and the timeframe in which the product will be delivered. This includes
iterative releases of the product, so as to let the client see fully implemented functionality early and to allow the developers to discover performance and architectural issues early in development. Each iterative release is treated as if the product were going into production: full testing and user acceptance are performed for each iterative release. Experience shows that iterations should be spaced at least 2-3 months apart; if iterations are closer than that, more time is spent on convergence and the project timeframe expands. During this phase, code reviews must be done weekly to ensure that the developers are delivering to specification, and all source code is put under source control. Also, full installation routines are to be used for each iterative release, just as would be done in production.
Deliverables
• Triage
• Weekly Status with Project Plan and Budget Analysis
• Risk Assessment
• System Documentation
• User Documentation (if needed)
• Test Signoff for each iteration
• Customer Signoff for each iteration
Production Phase
Once all iterations are complete, the final product is presented to the client for a final
signoff. Since the client has been involved in all iterations, this phase should go very
smoothly.
Deliverables
Post Mortem
The post mortem phase allows the team to step back and review the things that went well and the things that need improvement. Post mortem reviews cover processes that need adjustment, highlight the most effective processes, and provide action items that will improve future projects.
To conduct a post mortem review, announce the meeting at least a week in advance so that everyone has time to reflect on the project issues they faced. Everyone has to be asked to come to the meeting with the following:
During the meeting, the information listed above is collected. As each person offers their input, categorize it so that all comments are collected; this will show how many people had the same observations during the project. At the end of the observation review, there will be a list of the items that were mentioned most often. The team then goes through this list and prioritizes each item, which draws out the most important ones. Finally, a list of action items is made that will be used to improve the process, and the results are published. When the next project begins, everyone on the team should review the Post Mortem Report from the prior release so as to improve the next release.
Quality Management
The project quality management knowledge area comprises the set of processes that ensure the result of a project meets the needs for which the project was executed. Processes such as quality planning, assurance, and control are included in this area. Each process has a set of inputs and a set of outputs, and each process also has a set of tools and techniques that are used to turn the inputs into outputs.
Definition of Quality:
Some goals of quality programs include:
• Fitness for use. (Is the product or service capable of being used?)
• Fitness for purpose. (Does the product or service meet its intended
purpose?)
• Customer satisfaction. (Does the product or service meet the
customer's expectations?)
Quality Planning:
The process of identifying which quality standards are relevant to the project and determining how to satisfy them.
• Inputs include: quality policy, scope statement, product description, standards and regulations, and outputs of other processes.
• Methods used: benefit/cost analysis, benchmarking, flowcharting, and design of experiments.
• Outputs include: the Quality Management Plan, operational definitions, checklists, and inputs to other processes.
Quality Assurance
The process of evaluating overall project performance on a regular basis to provide confidence that the project will satisfy the relevant quality standards.
• Inputs include: the Quality Management Plan, results of quality control measurements, and operational definitions.
• Methods used: quality planning tools and techniques, and quality audits.
• Outputs include: quality improvement.
Quality Control
The process of monitoring specific project results to determine whether they comply with relevant quality standards, and identifying ways to eliminate the causes of unsatisfactory performance.
• Inputs include: work results, the Quality Management Plan, operational definitions, and checklists.
• Methods used include: inspection, control charts, Pareto charts, statistical sampling, flowcharting, and trend analysis.
• Outputs include: quality improvements, acceptance decisions, rework, completed checklists, and process adjustments.
Quality Policy
Quality Concepts
• Zero Defects
• The Customer is the Next Person in the Process
• Do the Right Thing Right the First Time (DTRTRTFT)
• Continuous Improvement Process (CIP) (From Japanese word, Kaizen)
• Pareto Chart
1. Ranks defects in order of frequency of occurrence to depict 100%
of the defects. (Displayed as a histogram)
2. Defects with most frequent occurrence should be targeted for
corrective action.
3. 80-20 rule: 80% of problems are found in 20% of the work (see the ranking sketch after this list).
4. Does not account for the severity of the defects.
• Cause and Effect Diagrams (fishbone diagrams or Ishikawa diagrams)
1. Analyzes the Input to a process to identify the causes of errors.
2. Generally consists of 8 major Input to a quality process to permit
the characterization of each input.
• Histograms
1. Shows frequency of occurrence of items within a range of activity.
2. Can be used to organize data collected for measurements done on a
product or process.
• Scatter diagrams
1. Used to determine the relationship between two or more pieces of
corresponding data.
2. The data are plotted on an "X-Y" chart to determine correlation
(highly positive, positive, no correlation, negative, and highly
negative)
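As a small, illustrative sketch of the Pareto ranking described above (in Python; the defect categories and counts are made-up data, not from a real project):

from collections import Counter

# One entry per defect found, tagged with an illustrative category.
defects = ["UI", "Logic", "UI", "Interface", "Logic", "UI", "Data",
           "UI", "Logic", "UI"]

counts = Counter(defects)
total = sum(counts.values())

cumulative = 0
print("Category    Count   Cumulative %")
for category, count in counts.most_common():     # most frequent first
    cumulative += count
    print(f"{category:<10} {count:>6}   {100 * cumulative / total:>10.1f}")
# Categories at the top of the ranking are the targets for corrective action.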
Risk Management
Risk management must be an integral part of any project, because everything does not always happen as planned. Project risk management contains the processes for identifying, analyzing, and responding to project risk. Each process has a set of inputs and a set of outputs, and each process also has a set of tools and techniques that are used to turn the inputs into outputs.
Risk Management Planning
Used to decide how to approach and plan the risk management activities for a project.
• Inputs include: the project charter, risk management policies, and the WBS all serve as inputs to this process.
• Methods used: many planning meetings will be held in order to generate the risk management plan.
• Outputs include: the major output is the risk management plan, which does not include the responses to specific risks; it does, however, include the methodology to be used, budgeting, timing, and other information.
Risk Identification
Determining which risks might affect the project and documenting their
characteristics
• Input includes: The risk management plan is used as input to this process
• Methods used: Documentation reviews should be performed in this process.
Diagramming techniques can also be used
• Output includes: Risk and risk symptoms are identified as part of this process.
There are generally two types of risks. They are business risks that are risks of
gain or loss. Then there are pure risks that represent only a risk of loss. Pure risks
are also known as insurable risks
Risk Analysis
Used to monitor risks, identify new risks, execute risk reduction plans, and
evaluate their effectiveness throughout the project life cycle.
• Input includes: Input to this process includes the risk management plan, risk
identification and analysis, and scope changes
• Methods used: Audits should be used in this process to ensure that risks are still
risks as well as discover other conditions that may arise.
• Output includes: Output includes work-around plans, corrective action, project
change requests, as well as other items
Expected Monetary Value (EMV)
• A risk quantification tool.
• EMV is the product of the risk event probability and the risk event value (a small calculation sketch follows below).
• Risk Event Probability: an estimate of the probability that a given risk event will occur.
Decision Trees
A diagram that depicts key interactions among decisions and associated chance events as
understood by the decision maker. Can be used in conjunction with EMV since risk
events can occur individually or in groups and in parallel or in sequence.
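The EMV calculation defined above is simple enough to sketch directly; the risks, probabilities, and impact values below are illustrative only:

# EMV = risk event probability x risk event value (illustrative data only).
risks = [
    {"name": "Key developer leaves",  "probability": 0.10, "impact": 50_000},
    {"name": "Requirements change",   "probability": 0.40, "impact": 20_000},
    {"name": "Test environment late", "probability": 0.25, "impact": 8_000},
]

for risk in sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True):
    emv = risk["probability"] * risk["impact"]
    print(f'{risk["name"]:<25} EMV = {emv:>9,.0f}')
# Ranking risks by EMV shows where risk responses are likely to be most worthwhile.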
Configuration Management
Release is the means of distributing the software outside the development team. Releases
must incorporate changes forced on the system by errors discovered by users and by
hardware changes. They must also incorporate new system functionality.
Change management
Software systems are subject to continual change requests from users, from developers, and from market forces. Change management is concerned with keeping track of changes, managing them, and ensuring that they are implemented in the most cost-effective way.
Changes should be reviewed by a group that decides whether or not they are cost-effective from a strategic, organizational, and technical viewpoint. This group is sometimes called a change control board and includes members from the project team.
Types of Software Testing
Testing is broadly divided into Static and Dynamic testing, and Dynamic testing is further divided into Structural and Functional testing.
Static Testing
Static testing refers to testing something that is not running: examining and reviewing it. A specification is a document, not an executing program, so it is considered static; it is also something that was created using written or graphical descriptions, or a combination of both.
Dynamic Testing
Dynamic testing means testing the software by actually executing it. It is commonly divided into structural and functional testing.
Structural (White Box) Testing
Structural tests verify the structure of the software itself and require complete access to the source code. This is known as 'white box' testing because you see into the internal workings of the code.
White-box tests make sure that the software structure itself contributes to proper and
efficient program execution. Complicated loop structures, common data areas, 100,000
lines of spaghetti code and nests of ifs are evil. Well-designed control structures, sub-
routines and reusable modular programs are good.
White-box testing's strength is also its weakness: the code needs to be examined by highly skilled technicians, which means that the tools and skills required are highly specialized to the
particular language and environment. Also, large or distributed system execution goes
beyond one program, so a correct procedure might call another program that provides bad
data. In large systems, it is the execution path as defined by the program calls, their
input and output and the structure of common files that is important. This gets into a
hybrid kind of testing that is often employed in intermediate or integration stages of
testing.
Functional (Black Box) Testing
Functional tests examine the behavior of software as evidenced by its outputs, without reference to internal workings; hence it is also called 'black box' testing. If the program consistently provides the desired features with acceptable performance, then specific source code features are irrelevant. It's a pragmatic and down-to-earth assessment of software.
Functional or black box tests better address the modern programming paradigm. As object-oriented programming, automatic code generation, and code re-use become more prevalent, analysis of the source code itself becomes less important and functional tests become more important. Black box tests also better address the quality target: since only the people paying for an application can determine whether it meets their needs, it is an advantage to create the quality criteria from this point of view from the beginning.
Black box tests have a basis in the scientific method. Like the process of science, Black
box tests must have a hypothesis (specifications), a defined method or procedure (test
plan), reproducible components (test data), and a standard notation to record the results.
One can re-run black box tests after a change to make sure the change only produced
intended results with no inadvertent effects.
Testing levels
There are several types of testing in a comprehensive software test process, many of
which occur simultaneously.
• Unit Testing
• Integration Testing
• System Testing
• Performance / Stress Test
• Regression Test
• Quality Assurance Test
• User Acceptance Test and Installation Test
Unit Testing
Testing each module individually is called unit testing, and it generally follows a white-box approach. In some organizations, a peer review panel performs design and/or code inspections. Unit or component tests usually involve some combination of structural and functional tests, run by programmers in their own systems. Component tests often require building some kind of supporting framework that allows components to execute.
Integration testing
The individual components are combined with other components to make sure that
necessary communications, links and data sharing occur properly. It is not truly system
testing because the components are not implemented in the operating environment. The
integration phase requires more planning and some reasonable sub-set of production-type
data. Larger systems often require several integration steps.
• all-at-once
• bottom-up
• top-down
Top-down integration lends itself to more structured organizations that plan out the entire test process. Although interface errors are found earlier, errors in critical low-level modules can be found later than you would like.
System Testing
The system test phase begins once modules are integrated enough to perform tests in a
whole system environment. System testing can occur in parallel with integration test,
especially with the top-down method.
A drawback of performance testing is that it confirms the system can handle heavy loads, but it cannot so easily determine whether the system is producing the correct information.
Regression Testing
Regression tests confirm that the implementation of changes has not adversely affected other functions. Regression testing is a type of test, as opposed to a phase in testing, and applies at all phases whenever a change is made.
Quality Assurance Test
Some organizations maintain a quality group that provides a different point of view, uses a different set of tests, and applies the tests in a different, more complete test environment. The group might look to see that organization standards have been followed in the specification, coding, and documentation of the software. They might check that the original requirement is documented, verify that the software properly implements the required functions, and see that everything is ready for the users to take a crack at it.
User Acceptance Test
Traditionally, this is where the users 'get their first crack' at the software. Unfortunately, by this time it's usually too late: if the users have not seen prototypes, been involved with the design, and understood the evolution of the system, they are inevitably going to be unhappy with the result. If one can perform every test as a user acceptance test, there is a much better chance of a successful project.
Types of Testing Techniques
Equivalence Partitioning:
Equivalence partitioning is the process of methodically reducing the huge (potentially infinite) set of possible test cases into a much smaller, but still equally effective, set. An equivalence class is a subset of data that is representative of a larger class, and equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class. When looking for equivalence partitions, think about ways to group similar inputs, similar outputs, and similar operations of the software; these groups are the equivalence partitions.
For example
A program that edits credit limits within a given range ($20,000-$50,000) would have three equivalence classes:
• Valid: credit limits within the range $20,000-$50,000
• Invalid: credit limits below $20,000
• Invalid: credit limits above $50,000
Boundary Value Analysis
Boundary analysis tests consist of developing test cases and data that focus on the input and output boundaries of a given function. In the same credit limit example, boundary analysis would test:
Low boundary plus or minus one ($19,999 and $20,001)
On the boundary ($20,000 and $50,000)
Upper boundary plus or minus one ($49,999 and $50,001)
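A minimal sketch of these equivalence classes and boundary values as executable test cases (in Python with pytest; is_valid_credit_limit is a hypothetical stand-in for the credit-limit edit described above):

import pytest

# Hypothetical stand-in for the credit-limit edit (valid range $20,000-$50,000).
def is_valid_credit_limit(amount):
    return 20_000 <= amount <= 50_000

@pytest.mark.parametrize("amount, expected", [
    (35_000, True),     # equivalence class: within the valid range
    (10_000, False),    # equivalence class: below the valid range
    (60_000, False),    # equivalence class: above the valid range
    (19_999, False), (20_001, True),   # low boundary plus or minus one
    (20_000, True),  (50_000, True),   # on the boundary
    (49_999, True),  (50_001, False),  # upper boundary plus or minus one
])
def test_credit_limit(amount, expected):
    assert is_valid_credit_limit(amount) == expected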
Error Guessing
This is based on the theory that test cases can be developed from the intuition and experience of the test engineer.
Example: where one of the inputs is a date, a tester may try February 29, 2000 or 9.9.99.
Incremental testing
Incremental testing is a disciplined method of testing the interfaces between unit-tested
programs as well as between system components. It involves adding unit-tested programs
to a given module or component one by one, and testing each result and combination.
There are two types of incremental testing:
Top-down: This begins testing from the top of the module hierarchy and works down to the bottom, using interim stubs to simulate lower interfacing modules or programs. Modules are added in descending hierarchical order.
Bottom-up: This begins testing from the bottom of the hierarchy and works up to the top. Modules are added in ascending hierarchical order. Bottom-up testing requires the development of driver modules, which provide the test input, call the module or program being tested, and display the test output.
There are procedures and constraints associated with each of these methods, although
bottom-up testing is often thought to be easier to use. Drivers are often easier to create
than stubs, and can serve multiple purposes. Output is also often easier to examine in
bottom-up testing, as the output always comes from the module directly above the
module under test.
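The sketch below illustrates the two ideas with hypothetical module names: a stub stands in for a lower-level module that is not yet integrated (top-down), while a driver supplies test input to, and calls, the module under test (bottom-up):

# Stub: simulates a lower-level module that has not been integrated yet.
def get_credit_rating_stub(customer_id):
    return "GOOD"                      # canned response instead of real logic

# Module under test; in top-down testing it is wired to the stub above.
def approve_order(customer_id, amount, get_credit_rating=get_credit_rating_stub):
    return amount <= 50_000 and get_credit_rating(customer_id) == "GOOD"

# Driver: provides the test input, calls the module under test, and displays output.
def driver():
    for customer_id, amount in [(1, 10_000), (2, 75_000)]:
        print(customer_id, amount, approve_order(customer_id, amount))

if __name__ == "__main__":
    driver()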
Thread testing
This test technique, which is often used during early integration testing, demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application. Thread testing and incremental testing are usually used together; for example, units can undergo incremental testing until enough units are integrated and a single business function can be performed, threading through the integrated components.
Testing Life Cycle
Defect Tracking
The following are important topics that help in the preparation of a test plan.
• High-Level Expectations
The first topics to address in the planning process are the ones that define the test team's high-level expectations. They are fundamental topics that must be agreed to by everyone on the project team, but they are often overlooked: they might be considered "too obvious" and assumed to be understood by everyone, but a good tester knows never to assume anything.
The test case design specification refines the test approach and identifies the features to be covered by the design and its associated tests. It also identifies the test cases and test procedures, if any, required to accomplish the testing, and specifies the feature pass/fail criteria. The purpose of the test design specification is to organize and describe the testing that needs to be performed on a specific feature. The following topics address this purpose and should be part of the test design specification that is created:
Defect Tracking
A defect can be defined in one of two ways. From the producer's viewpoint, a defect is a deviation from specifications, whether something is missing, wrong, etc. From the customer's viewpoint, a defect is anything that causes customer dissatisfaction, whether in the requirements or not; this is known as "fit for use". It is critical that defects identified at each stage of the project life cycle be tracked to resolution. Defects are recorded for the following major purposes:
Most project teams utilize some type of tool to support the defect tracking process. This tool could be as simple as a white board, a table created and maintained in a word processor, or one of the more robust tools available on the market today, such as Mercury's Test Director. Tools marketed for this purpose usually come with a number of customizable fields for tracking project-specific data in addition to the basics, and they also provide advanced features such as standard and ad-hoc reporting, e-mail notification to developers and/or testers when a problem is assigned to them, and graphing capabilities.
At a minimum, the tool selected should support the recording and communication of significant information about a defect. For example, a defect log could include the following (a minimal sketch of such a record follows the list):
• Defect ID number
• Descriptive defect name and type
• Source of defect -test case or other source
• Defect severity
• Defect priority
• Defect status (e.g. open, fixed, closed, user error, design, and so on)
-more robust tools provide a status history for the defect
• Date and time tracking for either the most recent status change, or for
each change in the status history
• Detailed description, including the steps necessary to reproduce the
defect
• Component or program where defect was found
• Screen prints, logs, etc. that will aid the developer in resolution process
• Stage of origination
• Person assigned to research and/or correct the defect
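A minimal, illustrative sketch of such a defect record (in Python; the fields mirror the list above and are not the schema of any particular tool):

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Defect:
    defect_id: int
    name: str
    defect_type: str
    source: str                  # e.g. a test case ID, or "other"
    severity: int                # 1 = most severe
    priority: int                # order in which defects should be fixed
    status: str = "open"         # open, fixed, closed, user error, ...
    component: str = ""          # component or program where it was found
    description: str = ""        # including the steps needed to reproduce it
    assigned_to: str = ""
    status_history: list = field(default_factory=list)

    def change_status(self, new_status):
        # Keep a dated history of status changes, as more robust tools do.
        self.status_history.append((datetime.now(), self.status, new_status))
        self.status = new_status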
The severity of a defect should be assigned objectively by the test team, based on predefined severity descriptions. For example, a "severity one" defect may be defined as one that causes data corruption, a system crash, security violations, etc. In large projects, it may also be necessary to assign a priority to the defect, which determines the order in which defects should be fixed. The priority assigned to a defect is usually more subjective, based on input from users regarding which defects are most important to them and therefore should be fixed first.
It is recommended that severity levels be defined at the start of the project so that they are consistently assigned and understood by the team. This foresight can help test teams avoid the common disagreements with development teams about the criticality of a defect.
• Defect prevention
• Deliverable base-lining
• Defect discovery/defect naming
• Defect resolution
• Process improvement
• Management reporting
Management Reporting
Test Reports
A final test report should be prepared at the conclusion of each test activity. This might
include
• Individual Project Test Report (e.g., a single software system)
• Integration Test Report
• System Test Report
• Acceptance Test Report
The test reports are designed to document the results of testing as defined in the test plan.
Without a well-developed test plan, which has been executed in accordance with its
criteria, it is difficult to develop a meaningful test report. It is designed to accomplish
three objectives:
• Define the scope of testing - normally a brief recap of the test plan;
• Present the results of testing; and
• Draw conclusions and make recommendations based on those results
The test report may be a combination of electronic data and hard copy. For example, if the function test matrix is maintained electronically, there is no reason to print it, as the paper report will summarize that data, draw the appropriate conclusions, and present recommendations.
The test report has one immediate and three long-term purposes. The immediate purpose is to provide information to the customers of the software system so that they can determine whether the system is ready for production and, if so, assess the potential consequences and initiate appropriate actions to minimize those consequences.
The first of the three long-term uses is for the project to trace problems in the event the application malfunctions in production. Knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective action.
The second long-term purpose is to use the data to analyze the rework process and make changes to prevent defects from occurring in the future. This is done by accumulating the results of many test reports to identify which components of the rework process are defect-prone. These defect-prone components identify tasks/steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects. The third long-term purpose is to show what was accomplished.
Integration testing tests the interfaces between individual projects. A good test plan will
identify the interfaces and institute test conditions that will validate interfaces. Given this,
the interface report follows the same format as the individual Project Test report, except
that the conditions tested are the interfaces.
• Fully Tested
• Tested With Open Defects
• Not Tested
This report shows the actual plan for having all functions working versus the current status of functions working. An ideal format would be a line graph.
Software Metrics
• Process Metric
• Product Metric
Process Metric: a metric used to measure the characteristics of the methods, techniques, and tools employed in developing, implementing, and maintaining the software system.
Product Metric: a metric used to measure the characteristics of the documentation and code.
The metrics for the test process include the status of test activities against the plan and the test coverage achieved so far, among others. An important metric is the number of defects found in internal testing compared to the defects found in customer tests, which indicates the effectiveness of the test process itself.
Test Metrics
Paths Tested = Number of Paths Tested / Total Number of Paths
Test Effectiveness = Number of Defects located in Testing / Number of Defects located in Production
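With illustrative numbers, the two metrics above can be computed as follows (the figures are made up for the example):

# Illustrative figures only.
paths_tested, total_paths = 45, 60
defects_in_testing, defects_in_production = 120, 15

path_coverage = paths_tested / total_paths
test_effectiveness = defects_in_testing / defects_in_production

print(f"Paths tested: {path_coverage:.1%}")                        # 75.0%
print(f"Testing vs production defects: {test_effectiveness:.1f}")  # 8.0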
Test Automation
Conversion Testing
Specifically designed to validate the effectiveness of the conversion process. This test
may be conducted jointly by developers and testers during integration testing, or at the
start of system testing, since system testing must be conducted with the converted data. Field-to-field mapping and data translation are validated, ideally using a full copy of production data in the test.
Vendor Validation Testing
Verifies that the functionality of contracted or third party software meets the
organization's requirements, prior to accepting it and installing it into a production
environment. This test can be conducted jointly by the software vendor and the test team,
and focuses on ensuring that all requested functionality has been delivered.
Stress and Load Testing
Conducted to validate that the application, database, and network can handle projected volumes of users and data effectively. The test is conducted jointly by developers, testers, DBAs, and network associates after system testing. During the test, the complete system is subjected to environmental conditions that defy expectations, to answer questions such as:
Performance Testing
Usually conducted in parallel with stress and load testing in order to measure
performance against specified service-level objectives under various conditions. For
instance, one may need to ensure that batch processing will complete within the allocated
amount of time, or that on-line response times meet performance requirements.
Recovery Testing
Evaluates the contingency features built into the application for handling interruptions and for returning to specific points in the application processing. Any restoration and restart capabilities are also tested here. This test may be conducted by the test team during system test, or by another team gathered specifically for this purpose.
Configuration Testing
In the IT industry, a large percentage of new applications are either client/server or web-based, and configuration testing validates that they will run on the various combinations of hardware and software. For instance, configuration testing for a web-based application would incorporate versions and releases of operating systems, internet browsers, modem speeds, and various off-the-shelf applications that might be integrated (e.g. an e-mail application).
Test Standards
External Standards - familiarity with and adoption of industry test standards from external organizations.
Internal Standards - development and enforcement of the test standards that testers must meet.
IEEE
1008-1987 (R1993) IEEE Standard for Software Unit Testing (ANSI)
Other Standards:
• DoD-Department of Defense
Internal Standards
• Simplifies communication
• Promotes consistency and uniformity
• Eliminates the need to invent yet another solution to the same problem
• Provides continuity
• Presents a way of preserving proven practices
• Supplies benchmarks and framework
Web Testing
Introduction
• Usability
• Functionality
• Server side Interface
• Client side Compatibility
• Performance
• Security
Usability
One of the reasons the web browser is used as the front end to applications is its ease of use. Users who have been on the web before will probably know how to navigate a well-built web site. While concentrating on this portion of testing, it is important to verify that the application is easy to use. Many believe that this is the least important area to test, but the site should still be easy to use: even if the web site is simple, there will always be someone who needs some clarification. Additionally, the documentation also needs to be verified, so that the instructions are correct.
The following are some of the things to be checked for easy navigation through a
website:
• Site map or navigational bar
Does the site have a map? Sometimes power users know exactly where they want to go
and don't want to go through lengthy introductions. Or new users get lost easily. Either
way a site map and/or ever-present navigational map can guide the user. The site map
needs to be verified for its correctness. Does each link on the map actually exist? Are
there links on the site that are not represented on the map? Is the navigational bar
present on every screen? Is it consistent? Does each link work on each page? Is it
organized in an intuitive manner?
• Content
To a developer, functionality comes before wording. Anyone can slap together a fancy mission statement later; while they are developing, they just need some filler to verify alignment and layout. Unfortunately, text produced like this may sneak through the cracks. It is important to check with the public relations department on the exact wording of the content; otherwise, the company can get into legal trouble. One also has to make sure the site looks professional: overuse of bold text, big fonts, and blinking can turn away a customer quickly. It might be a good idea to have a graphic designer look over the site during user acceptance testing. Finally, one has to make sure that any time a web reference is given, it is hyperlinked. Plenty of sites ask users to email them at a specific address or to download a browser from an address, but if the user can't click on it, they are going to be annoyed.
• Colors/backgrounds
Ever since the web became popular, everyone thinks they are a graphic designer.
Unfortunately, some developers are more interested in their new backgrounds, than
ease of use. Sites will have yellow text on a purple picture of a fractal pattern. This may
seem "pretty neat", but it's not easy to use. Usually, the best idea is to use little or no
background. If there is a background, it might be a single color on the left side of the
page, containing the navigational bar. But, patterns and pictures distract the user.
• Images
Whether it's a screen grab or a little icon that points the way, a picture is worth a thousand words; sometimes the best way to tell users something is to simply show them. However, bandwidth is precious to the client and the server, so it needs to be conserved. Do all the images add value to each page, or do they simply waste bandwidth? Can a different file type (.GIF, .JPG) be used to save 30k? In general, one doesn't want large pictures on the front page, since most users who abandon a page load will do so on the front page. If the front page loads quickly, it increases the chance that they will stay.
• Tables
It has to be verified that tables are setup properly. Does the user constantly have to
scroll right to see the price of the item? Would it be more efficient to put the price
closer to the left and put miniscule details to the right? Are the columns wide enough or
does every row have to wrap around? Are certain rows excessively high because of one
entry? These are some of the points to be taken care of.
• Wrap-around
Finally, it has to be verified whether the wrap-around occurs properly. If the text refers
to a picture on the right, make sure the picture is on the right. Make sure that widow
and orphan sentences and paragraphs don't layout in an awkward manner because of
pictures.
Functionality
The functionality of the web site is why the company hired a developer and not just an
artist. This is the part that interfaces with the server and actually "does stuff".
• Links
A link is the vehicle that gets the user from page to page. Two things have to be verified for each link: that the link brings up the page it says it will, and that the page being linked to actually exists. It may sound a little silly, but many web sites exist with internal broken links.
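As a rough sketch of how internal links can be checked automatically (Python, assuming the third-party requests library is available; https://example.com/ is a placeholder for the page under test):

from html.parser import HTMLParser
from urllib.parse import urljoin
import requests

class LinkCollector(HTMLParser):
    # Collects the href targets of all anchor tags on a page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

base = "https://example.com/"            # placeholder: page under test
collector = LinkCollector()
collector.feed(requests.get(base, timeout=10).text)

for link in collector.links:
    url = urljoin(base, link)            # resolve relative links
    status = requests.head(url, allow_redirects=True, timeout=10).status_code
    print(("OK      " if status < 400 else "BROKEN  ") + url)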
• Forms
When a user submits information through a form, it needs to work properly. The submit button needs to work. If the form is for an online registration, the user should be given login information (that works) after successful completion. If the form gathers shipping information, it should be handled properly and the customer should receive their package. In order to test this, you need to verify that the server stores the information properly and that systems down the line can interpret and use that information.
• Data verification
If the system verifies user input according to business rules, then that needs to work
properly. For example, a State field may be checked against a list of valid values. If this
is the case, you need to verify that the list is complete and that the program actually
calls the list properly (add a bogus value to the list and make sure the system accepts
it).
• Cookies
Most users only like the kind with sugar, but developers love web cookies. If the
system uses them, you need to check them. If they store login information, make sure
the cookies work and make sure it's encrypted in the cookie file. If the cookie is used
for statistics, verify that totals are being counted properly. And you'll probably want to
make sure those cookies are encrypted too, otherwise people can edit their cookies and
skew your statistics.
Most importantly, one may want to verify the application-specific functional requirements. Try to perform all the functions a user would: place an order, change an order, cancel an order, check the status of the order, change shipping information before an order is shipped, pay online, ad nauseam. This is what users come to the site for, so one needs to make sure that they can do everything that is advertised.
Many times, a web site is not an island. The site will call external servers for additional
data, verification of data or fulfillment of orders.
• Server interface
The first interface one should test is the interface between the browser and the server: transactions should be attempted, then the server logs viewed, and it should be verified that what is seen in the browser is actually happening on the server. It's also a good idea to run queries on the database to make sure the transaction data is being stored properly.
• External interfaces
Some web systems have external interfaces. For example, a merchant might verify
credit card transactions real-time in order to reduce fraud. Several test transactions may
have to be sent using the web interface. Try credit cards that are valid, invalid, and
stolen. If the merchant only takes Visa and MasterCard, try using a Discover card. (A
simple client-side script can check 3 for American Express, 4 for Visa, 5 for
MasterCard, or 6 for Discover, before the transaction is sent.) Basically, it has to be
ensured that the software can handle every possible message returned by the external
server.
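A rough sketch of the prefix check mentioned above (written here in Python for illustration; a real site would run an equivalent check in a client-side script before submitting the form, and the card numbers shown are standard test numbers, not real accounts):

# First-digit prefixes mentioned above.
CARD_PREFIXES = {"3": "American Express", "4": "Visa",
                 "5": "MasterCard", "6": "Discover"}

ACCEPTED = {"Visa", "MasterCard"}        # the merchant in the example above

def card_type(card_number):
    digits = card_number.replace(" ", "")
    return CARD_PREFIXES.get(digits[:1], "Unknown")

for number in ["4111 1111 1111 1111", "6011 0000 0000 0004"]:
    kind = card_type(number)
    print(number, "->", kind, "(accepted)" if kind in ACCEPTED else "(rejected)")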
• Error handling
One of the areas most often left untested is interface error handling. Usually we try to make sure our system can handle all of our own errors, but we never plan for the other systems' errors or for the unexpected. Try leaving the site mid-transaction: what happens? Does the order complete anyway? Try losing the Internet connection from the user to the server. Try losing the connection from the server to the credit card verification server. Is there proper error handling for all these situations? Are charges still made to credit cards? If the interruption is not user-initiated, does the order get stored so customer service reps can call back if the user doesn't return to the site?
Client-side Compatibility
It has to be verified that the application can work on the machines your customers will be using. If the product is going on the web for the world to use, every operating system, browser, video setting, and modem speed has to be tried in various combinations.
• Operating systems
Does the site work for both Mac and IBM compatibles? Some fonts are not available on both systems, so make sure that secondary fonts are selected. Make sure that the site doesn't use plug-ins only available for one OS if users will be on both.
• Browsers
Does the site work with Netscape? Internet Explorer? Lynx? Some HTML commands
or scripts only work for certain browsers. Make sure there are alternate tags for images,
in case someone is using a text browser. If SSL security is used, it has to be checked with browsers 3.0 and higher, and it has to be verified that there is a message for those using older browsers.
• Video settings
Does the layout still look good at 640x480 or 800x600? Are fonts too small to read? Are they too big? Do all the text and graphic alignments still work?
• Modem/connection speeds
Does it take 10 minutes to load a page with a 28.8 modem, or has it only been tested on a high-speed connection? Users will expect long download times when they are grabbing documents or demos, but not on the front page. It has to be ensured that the images aren't too large, and that marketing doesn't put 50k of size -6 font keywords for search engines.
• Printers
Users like to print. The concept behind the web should save paper and reduce printing,
but most people would rather read on paper than on the screen. So, you need to verify
that the pages print properly. Sometimes images and text align on the screen differently
than on the printed page. It has to be verified that order confirmation screens can be
printed properly.
• Combinations
Different combinations have to be tried. Maybe 800x600 looks good on the Mac but not on the IBM. Maybe IBM with Netscape works, but not with Lynx. If the web site will be used internally, it might make testing a little easier: if the company has an official web browser choice, then it only has to be verified that the site works for that browser, and if everyone has a high-speed connection, load times need not be checked (but keep in mind that some people may dial in from home). With internal applications, the development team can make disclaimers about system requirements and only support those system setups. Ideally, though, the site should work on all machines, so as not to limit growth and changes in the future.
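One way to keep track of the combinations to be covered is simply to enumerate them; the values below are illustrative, not a recommended support matrix:

from itertools import product

# Illustrative configuration dimensions.
operating_systems = ["Windows", "Mac"]
browsers = ["Netscape", "Internet Explorer"]
resolutions = ["640x480", "800x600", "1024x768"]
connections = ["28.8 modem", "high-speed"]

combinations = list(product(operating_systems, browsers, resolutions, connections))
print(len(combinations), "configurations to cover")
for combo in combinations[:3]:           # show the first few
    print(combo)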
Performance Testing
It needs to be verified that the system can handle a large number of users at the same time, a large amount of data from each user, and a long period of continuous use.
Accessibility is extremely important to users: if they get a "busy signal", they hang up and call the competition. Not only must the system be checked so that customers can gain access, but, since hackers will often attempt to gain access to a system by overloading it, for the sake of security the system also needs to know what to do when it is overloaded, not simply blow up.
If the site has just put up the results of a national lottery, it had better be able to handle millions of users right after the winning numbers are posted. A load test tool can simulate many concurrent users accessing the site at the same time.
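A very small sketch of that idea (Python, assuming the third-party requests library; the URL is a placeholder, and dedicated load-test tools do far more than this):

from concurrent.futures import ThreadPoolExecutor
import time
import requests

URL = "https://example.com/results"      # placeholder: page under load
CONCURRENT_USERS = 50

def one_user(_):
    start = time.perf_counter()
    status = requests.get(URL, timeout=30).status_code
    return status, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(one_user, range(CONCURRENT_USERS)))

errors = sum(1 for status, _ in results if status >= 400)
slowest = max(elapsed for _, elapsed in results)
print(f"{errors} errors out of {len(results)}, slowest response {slowest:.2f}s")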
Most customers may only order 1-5 books from your new online bookstore, but what if a university bookstore decides to order 5000 copies of Intro to Psychology? Or what if one user wants to send a gift to a large number of friends for Christmas (with a separate mailing address for each, of course)? Can the system handle large amounts of data from a single user?
If the site is intended to take orders for a specific occasion, it had better hold up well in
the run-up to that occasion. If the site offers web-based email, it had better be able to run
for months or even years without downtime. It will probably be necessary to use an
automated test tool to implement these types of tests, since they are difficult to do
manually. Imagine coordinating 100 people to hit the site at the same time. Now try
100,000 people. Generally, the tool will pay for itself the second time you use it. Once
the tool is set up, running another test is just a click away.
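As a rough illustration of what such a tool does, the sketch below (Python, using the widely available requests library) simulates a number of concurrent users hitting a page and reports response times. The target URL, user count, and request count are hypothetical placeholders, not taken from this material; a real load test would normally be built with a dedicated tool.

import time
import statistics
import requests
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://www.example.com/"   # hypothetical site under test
CONCURRENT_USERS = 50                    # number of simulated users
REQUESTS_PER_USER = 10

def simulated_user(user_id):
    # Each simulated user requests the front page repeatedly and records timings.
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.time()
        response = requests.get(TARGET_URL, timeout=30)
        timings.append(time.time() - start)
        assert response.status_code == 200, f"user {user_id} got {response.status_code}"
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        all_timings = [t for user_timings in pool.map(simulated_user, range(CONCURRENT_USERS))
                       for t in user_timings]
    print(f"Requests sent : {len(all_timings)}")
    print(f"Average time  : {statistics.mean(all_timings):.2f} s")
    print(f"Worst time    : {max(all_timings):.2f} s")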
Security
Even if credit card payments are not accepted, security is very important. For some
customers, the web site will be their only exposure to the company, and if that exposure
is a hacked page, they won't feel safe doing business with the company over the
internet.
• Directory setup
The most elementary step of web security is proper setup of directories. Each directory
should have an index.html or main.html page so a directory listing doesn't appear.
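A very small automated check for this can be sketched as follows (Python with the requests library); the directory URLs listed are hypothetical examples, and the "Index of" string is simply the marker that common web servers put in an auto-generated listing.

import requests

# Hypothetical directories on the site under test
DIRECTORIES = [
    "http://www.example.com/images/",
    "http://www.example.com/scripts/",
    "http://www.example.com/downloads/",
]

for url in DIRECTORIES:
    body = requests.get(url, timeout=10).text
    # A default auto-generated directory listing usually contains "Index of";
    # a proper index.html or main.html page should be served instead.
    if "Index of" in body:
        print(f"WARNING: directory listing exposed at {url}")
    else:
        print(f"OK: {url}")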
• SSL
Many sites use SSL for secure transactions. When entering an SSL site, there will be a
browser warning, and the HTTP in the location field of the browser will change to
HTTPS. If the development group uses SSL, it has to be ensured that there is an alternate
page for browsers with versions less than 3.0, since SSL is not compatible with those
browsers. Sufficient warnings when entering and leaving the secured site have to be
provided. It also needs to be checked whether there is a time-out limit and what happens
if the user tries a transaction after the timeout.
• Logins
In order to validate users, several sites require customers to log in. This makes it easier
for the customer, since they don't have to re-enter personal information every time. You
need to verify that the system does not allow invalid username/password combinations
and that it does allow valid logins. Is there a maximum number of failed logins allowed
before the server locks out the current user? Is the lockout based on IP? What happens
after the maximum number of failed login attempts, and what are the rules for password
selection? These need to be checked.
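The checks above can be scripted; the sketch below assumes a hypothetical login endpoint, hypothetical credentials, and an assumed lockout threshold of three failed attempts, so the URL, field names, and status codes would all have to be adapted to the real application.

import requests

LOGIN_URL = "http://www.example.com/login"   # hypothetical endpoint
MAX_FAILED_ATTEMPTS = 3                      # assumed lockout rule

def try_login(username, password):
    # Posts the credentials and returns the server's response.
    return requests.post(LOGIN_URL,
                         data={"user": username, "pass": password},
                         timeout=10)

# Valid credentials should be accepted.
assert try_login("alice", "correct-password").status_code == 200

# Invalid credentials should be rejected.
assert try_login("alice", "wrong-password").status_code in (401, 403)

# After the maximum number of failed attempts the account should lock out,
# even when the correct password is then supplied.
for _ in range(MAX_FAILED_ATTEMPTS):
    try_login("alice", "wrong-password")
assert try_login("alice", "correct-password").status_code in (401, 403, 423)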
• Log files
Behind the scenes, it needs to be verified that server logs are working properly. Does
the log track every transaction? Does it track unsuccessful login attempts? Does it
track stolen credit card usage? What does it store for each transaction? IP address? User
name?
• Scripting languages
Scripting languages are a constant source of security holes. The details are different for
each language. Some allow access to the root directory. Others only allow access to the
mail server, but a resourceful hacker could mail the server's username and password
files to themselves. Find out what scripting languages are being used and research their
loopholes. It might also be a good idea to subscribe to a security newsgroup that
discusses the language being tested.
Conclusion
Whether an Internet, intranet, or extranet application is being tested, testing for the web
can be more challenging than testing non-web applications. Users have high expectations
of web page quality. In many cases, the page is up for public relations as much as for
functionality, so the impression must be perfect.
Testing Terms
Application: A single software product that may or may not fully support a business
function.
Black-box Testing: A test technique that focuses on testing the functionality of the
program, component, or application against its specifications without knowledge of
how the system is constructed; usually data or business process driven.
Boundary Value Analysis: A data selection technique in which test data is chosen
from the "boundaries" of the input or output domain classes, data structures, and
procedure parameters. Choices often include the actual minimum and maximum
boundary values, the maximum value plus or minus one, and the minimum value plus
or minus one.
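For illustration, a minimal sketch in Python of the boundary values that would be chosen for a hypothetical input field accepting whole numbers from 1 to 100 (the field and its validation rule are assumptions, not taken from the definition above):

MIN_VALUE, MAX_VALUE = 1, 100        # hypothetical valid range

boundary_test_values = [
    MIN_VALUE - 1,   # just below the minimum (expected: rejected)
    MIN_VALUE,       # the minimum itself     (expected: accepted)
    MIN_VALUE + 1,   # just above the minimum (expected: accepted)
    MAX_VALUE - 1,   # just below the maximum (expected: accepted)
    MAX_VALUE,       # the maximum itself     (expected: accepted)
    MAX_VALUE + 1,   # just above the maximum (expected: rejected)
]

def is_valid(value):
    # Hypothetical validation rule under test.
    return MIN_VALUE <= value <= MAX_VALUE

for value in boundary_test_values:
    print(value, "accepted" if is_valid(value) else "rejected")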
Checkpoint: A formal review of key project deliverables. One checkpoint is defined for
each key project deliverable, and verification and validation must be done for each of
these deliverables that is produced.
Cost of Quality (COQ): Money spent above and beyond expected production costs (labor,
materials, equipment) to ensure that the product the customer receives is a quality (defect-free)
product. The Cost of Quality includes prevention, appraisal, and correction or repair costs.
Conversion Testing: Validates the effectiveness of data conversion processes, including field-
to-field mapping, and data translation.
Decision Coverage: A white-box testing technique that measures the number, or
percentage, of decision directions executed by the test cases designed. 100% decision
coverage would indicate that all decision directions had been executed at least once
during testing. Alternatively, each logical path through the program can be tested. Often, paths
through the program are grouped into a finite set of classes, and one path from each class is
tested.
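As a minimal illustration (the shipping-fee rule below is a made-up example, not from the text), a function with a single two-way decision reaches 100% decision coverage with just two test cases, one per direction:

def shipping_fee(order_total):
    # A single decision with two directions.
    if order_total >= 100:
        return 0     # "true" direction: free shipping
    return 5         # "false" direction: flat fee

# Two test cases give 100% decision coverage of this function:
assert shipping_fee(150) == 0   # exercises the "true" direction
assert shipping_fee(50) == 5    # exercises the "false" direction
print("both decision directions executed")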
Defect: Operationally, it is useful to work with two definitions of a defect: (1) From the
producer's viewpoint: a product requirement that has not been met or a product attribute
possessed by a product or a function performed by a product that is not in the statement of
requirements that define the product; or (2) From the customer's viewpoint: anything that
causes customer dissatisfaction, whether in the statement of requirements or not.
Driver: Code that sets up an environment and calls a module for test.
Defect Tracking Tools: Tools for documenting defects as they are found during testing
and for tracking their status through to resolution.
Desk Checking: The most traditional means for analyzing a system or a program. The
developer of a system or program conducts desk checking. The process involves
reviewing the complete product to ensure that it is structurally sound and that the
standards and requirements have been met. This tool can also be used on artifacts created
during analysis and design.
Entrance Criteria: Required conditions and standards for work product quality that
must be present or met for entry into the next stage of the software development process.
Equivalence Partitioning: A data selection technique in which test data is chosen as a
representative value of the larger class of data. For example, a business rule that indicates that
a program should edit salaries within a given range ($10,000 - $15,000) might have 3
equivalence classes to test: a value below the range, a value within the range, and a value
above the range.
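A small sketch of those three classes in Python, assuming a hypothetical validation function for the $10,000 - $15,000 rule quoted above:

def salary_is_valid(salary):
    # Hypothetical rule under test: salaries must lie within $10,000 - $15,000.
    return 10_000 <= salary <= 15_000

# One representative value is picked from each equivalence class:
assert salary_is_valid(12_500) is True    # class 1: within the range
assert salary_is_valid(5_000) is False    # class 2: below the range
assert salary_is_valid(20_000) is False   # class 3: above the range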
Error Guessing: A data selection technique for picking values that seem likely to cause
defects. This technique is based upon the theory that test cases and test data can be developed
based on the intuition and experience of the tester.
Exhaustive Testing: Executing the program through all possible combinations of values
for program variables.
Exit Criteria: Standards for work product quality which block the promotion of
incomplete or defective work products to subsequent stages of the software development
process.
Functional Testing: Application of test data derived from the specified functional
requirements without regard to the final program structure.
Inspection: A formal peer review in which a work product is examined in detail to determine
whether defects exist. An inspection identifies defects, but does not attempt to correct them.
Authors take corrective actions and arrange follow-up reviews as needed.
Integration Testing: This test begins after two or more programs or application
components have been successfully unit tested. It is conducted by the development team to
validate the technical quality or design of the application. It is the first level of testing which
formally integrates a set of programs that communicate among themselves via messages or
files (a client and its server(s), a string of batch programs, or a set of on-line modules within a
dialog or conversation).
Life Cycle Testing: The process of verifying the consistency, completeness, and
correctness of software at each stage of the development lifecycle.
Performance Test: Validates that both the on-line response time and batch run times meet
the defined performance requirements.
Quality Assurance (QA): The set of support activities (including facilitation, training,
measurement, and analysis) needed to provide adequate confidence that processes are
established and continuously improved to produce products that meet specifications and are fit
for use.
Quality Control (QC): The process by which product quality is compared with
applicable standards, and the action taken when nonconformance is detected. Its focus is defect
detection and removal. This is a line function; that is, the performance of these tasks is the
responsibility of the people working within the process.
Recovery Test: Evaluates the contingency features built into the application for handling
interruptions and for returning to specific points in the application processing cycle,
including checkpoints, backups, restores, and restarts. This test also assures that disaster
recovery is possible.
Structural Testing: A testing method in which the test data are derived solely from the
program structure.
Stub: Special code segments that, when invoked by a code segment under test, simulate
the behavior of designed and specified modules not yet constructed.
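A compact sketch of how a driver and a stub fit together (the payment module and order function here are hypothetical names used purely for illustration):

def payment_gateway_stub(amount):
    # Stub: simulates the behaviour of a payment module that is not yet built.
    return {"status": "approved", "amount": amount}

def place_order(amount, charge):
    # Module under test: depends on a payment module supplied as 'charge'.
    result = charge(amount)
    return result["status"] == "approved"

def driver():
    # Driver: sets up the test data and calls the module under test with the stub.
    assert place_order(99.50, payment_gateway_stub) is True
    print("place_order unit test passed")

driver()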
System test: During this event, the entire system is tested to verify that all functional,
information, structural and quality requirements have been met. A predetermined
combination of tests is designed that, when executed successfully, satisfy management
that the system meets specifications. System testing verifies the functional quality of the
system in addition to all external interfaces, manual procedures, restart and recovery, and
human-computer interfaces. It also verifies that interfaces between the application and the
open environment work correctly, that JCL functions correctly, and that the application
functions appropriately with the Database Management System, Operations environment,
and any communications systems.
Test Case: A test case is a document that describes an input, action, or event and an expected
response, to determine if a feature of an application is working correctly. A test case
should contain particulars such as test case identifier, test case name, objective, test
conditions/setup, input data requirements, steps, and expected results.
Test Case Specification: An individual test condition, executed as part of a larger test, that
contributes to the test's objectives. Test cases document the input, expected results, and
execution conditions of a given test item. Test cases are broken down into one or more detailed
test scripts and test data conditions for execution.
Test Data Set: The set of input elements used in the testing process.
Test Design Specification: A document that specifies the details of the test approach for
a software feature or a combination of features and identifies the associated tests.
Test Log: A chronological record of relevant details about the execution of tests.
Test Plan: A document describing the intended scope, approach, resources, and schedule
of testing activities. It identifies test items, the features to be tested, the testing tasks, the
personnel performing each task, and any risks requiring contingency planning.
Test Summary Report: A document that describes testing activities and results and
evaluates the corresponding test items.
Test Scripts: A tool that specifies an order of actions that should be performed during a
test session. The script also contains expected results. Test scripts may be manually
prepared using paper forms, or may be automated using capture/playback tools or other
kinds of automated scripting tools.
Usability Test: The purpose of this event is to review the application user interface and
other human factors of the application with the people who will be using the application.
This is to ensure that the design (layout and sequence, etc.) enables the business functions
to be executed as easily and intuitively as possible. This review includes assuring that the
user interface adheres to documented User Interface standards, and should be conducted
early in the design stage of development. Ideally, an application prototype is used to walk
the client group through various business scenarios, although paper copies of screens,
windows, menus, and reports can be used.
User Acceptance Test: User Acceptance Testing (UAT) is conducted to ensure that the
system meets the needs of the organization and the end user/customer. It validates that
the system will work as intended by the user in the real world, and is based on real-world
business scenarios, not system requirements. Essentially, this test validates that the
RIGHT system was built.
Verification:
I) The process of determining whether the products of a given phase of the
software development cycle fulfill the requirements established during
the previous phase.
II) The act of reviewing, inspecting, testing, checking, auditing, or otherwise
establishing and documenting whether items, processes, services, or
documents conform to specified requirements.
Walkthrough: A manual analysis technique in which the module author describes the
module's structure and logic to an audience of colleagues. The technique focuses on error
detection, not correction. A walkthrough will usually use a formal set of standards or criteria
as the basis of the review.
White-box Testing: A testing technique that assumes that the path of the logic in a
program unit or component is known. White-box testing usually consists of testing paths,
branch by branch, to produce predictable results. This technique is usually used during
tests executed by the development team, such as Unit or Component testing.
Technical Questions
1. What is Software Testing?
The process of exercising or evaluating a system or system component by manual or
automated means to verify that it satisfies specified requirements or to identify
differences between expected and actual results.
6. What are the entry criteria for Functionality and Performance testing?
Functional testing:
Functional Specification / BRS (CRS) / User Manual; an integrated application that is stable
for testing.
7. Why do you go for White box testing, when Black box testing is available?
The objective of black box testing is to certify the commercial (business) aspects as well as
the functional (technical) aspects of the application against a benchmark. Loops, structures,
arrays, conditions, files, etc. are very micro-level elements, but they are the basement
(foundation) for any application, so white box testing examines and tests these things at that
micro level.
11. Tell the names of some testing types which you have learnt or experienced.
Any 5 or 6 types which are related to the company's profile are good to mention in the interview:
• Ad - Hoc testing
• Cookie Testing
• CET (Customer Experience Test)
• Depth Test
• Event-Driven
• Performance Testing
• Recovery testing
• Sanity Test
• Security Testing
• Smoke testing
• Web Testing
13. After completing testing, what would you deliver to the client?
Test deliverables, namely:
• Test Plan
• Test Data
• Test Design Documents (Conditions/Cases)
• Defect Reports
• Test Closure Documents
• Test Metrics
When a test condition is executed, its result should be compared to the expected result. As
test data is needed for this, here comes the role of the test bed, where the test data is made
ready.
19. Why do we prepare test conditions, test cases, and test scripts (before starting
testing)?
These are the test design documents used to execute the actual testing, without which
execution of testing is impossible. Ultimately this execution is going to find the bugs to
be fixed, so we have to prepare these documents.
20. Is it not a waste of time to prepare test conditions, test cases & test scripts?
No document prepared in any process is a waste of time. Test design documents in
particular play a vital role in test execution and can never be called a waste of time, as
without them proper testing cannot be done.
22. What kind of documents do you need for functional testing?
The functional specification is the ultimate document, as it expresses all the functionalities
of the application; other documents like the user manual and the BRS are also needed for
functional testing. A gap analysis document will add value in understanding the expected
and the existing system.
No. The system as a whole can be tested only if all modules are integrated and all
modules work correctly. System testing should be done before UAT (User Acceptance
Testing) and after Unit and Integration Testing.
Test Automation:
Here are some of the attributes of test automation that can be measured,
Maintainability
• Definition: The effort needed to update the test automation suites for each new
release.
• Possible measurements: The possible measurements can be e.g. the average work
effort in hours to update a test suite.
Reliability
Flexibility
• Definition: The ease of working with all the different kinds of automation test
ware.
• Possible measurements: The time and effort needed to identify, locate, restore,
combine and execute the different test automation test ware.
Efficiency
• Definition: The total cost related to the effort needed for the automation.
• Possible measurements: Monitoring over time the total cost of automated testing,
i.e. resources, material, etc.
Portability
Robustness
Usability
• Definition: The extent to which automation can be used by different types of users
(Developers, non-technical people or other users etc.,)
• Possible measurements: The time needed to train users to become confident and
productive with test automation.
a. Test Type Expected. (E.g. Regression Testing / Functional Testing /
Performance-Load Testing)
b. Tool Cost Vs Project Testing Budget Estimation.
c. Protocol Support by Tool Vs. Application Designed Protocol.
d. Tools Limitations Vs Application Test Requirements
e. H/W, S/W & Platform Support of Tool Vs Application test Scope for these
attributes.
f. Tool License Limitations / Availability Vs Test Requirements.(Tools
Scalability)
35. How one will evaluate the tool for test automation?
Whenever a tool has to be evaluated, one needs to go through a few important verifications /
validations of the tool, like:
a. Platform Support from the Tool.
b. Protocols / Technologies Support.
c. Tool Cost
d. Tool Type with its Features Vs Our Requirements Analysis.
e. Tool Usage Comparisons with other similar available tools in market.
f. Tool’s Compatibility with our Application Architecture and Development
Technologies.
g. Tool Configuration & Deployment Requirements.
h. Tools Limitations Analysis.
myself as on un-installation of JDK and Java-Addins my application
works fine.
41. What are the types of scripting techniques for test automation ?
Scripting technique: how to structure automated test scripts for maximum benefit and
minimum impact from software changes; scripting issues; scripting approaches (linear,
shared, data-driven and programmed); script pre-processing; and minimizing the impact of
software changes on test scripts.
The major ones used are,
a. Data-Driven Scripting (a short sketch follows this list)
b. Centralized Application Specific / Generic Compiled Modules / Library
Development.
c. Parent Child Scripting.
d. Techniques to Generalize the Scripts.
e. Increasing the factor of Reusability of the Script.
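As a minimal sketch of the data-driven approach from item (a), the script logic below is written once and re-used for every row of test data; the login function and the inline sample data are hypothetical stand-ins for the application under test and for an external data file.

import csv
import io

# Inline sample rows standing in for an external CSV file of test data
# (username, password, expected_result); in practice the data would be
# maintained separately from the script, e.g. in a spreadsheet.
SAMPLE_DATA = io.StringIO(
    "admin,secret,True\n"
    "admin,wrong,False\n"
    "guest,secret,False\n"
)

def login(username, password):
    # Hypothetical function under test; a real script would drive the application.
    return username == "admin" and password == "secret"

for username, password, expected in csv.reader(SAMPLE_DATA):
    actual = login(username, password)
    status = "PASS" if str(actual) == expected else "FAIL"
    print(f"{status}: login({username!r}, {password!r}) -> {actual}")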
43. What tools are available for support of testing during software development life
cycle?
TestDirector for test management and Bugzilla for bug tracking and notification are
examples of tools that support testing.
If one talks about the limitations of automating software testing, then to mention a few:
a. Automation needs a lot of time in the initial stage of automation.
b. Every tool will have its own limitations with respect to protocol support,
technologies supported, object recognition, platform support, etc., due to
which not 100% of the application can be automated; there is always some
limitation of the tool which we have to overcome with R&D.
c. The tool's memory utilization is also an important factor: it consumes
the application's memory resources and creates problems for the application in
a few cases, such as Java applications.