
What is a Defect? What are its Causes?

A Software Defect / Bug is a condition in a software product which does not meet a
software requirement (as stated in the requirement specifications) or end-user
expectations (which may not be specified but are reasonable). In other words, a defect
is an error in coding or logic that causes a program to malfunction or to produce
incorrect/unexpected results.

A program that contains a large number of bugs is said to be buggy.

Reports detailing bugs in software are known as bug reports.

Applications for tracking bugs are known as bug tracking tools.

The process of finding the cause of bugs is known as debugging.

The process of intentionally injecting bugs into a software program, to estimate test coverage by monitoring the detection of those bugs, is known as bebugging.
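To make the idea concrete, here is a minimal sketch of the bebugging arithmetic; all counts are invented, and the estimate assumes seeded bugs are roughly as hard to detect as real ones:

```python
# Bebugging sketch: seed known bugs, run the test suite, and use the
# detection rate on the seeded bugs to estimate undiscovered real bugs.
# All numbers are hypothetical illustration values.

seeded_bugs = 20        # bugs deliberately injected
seeded_found = 15       # seeded bugs the test suite detected
real_found = 30         # real (non-seeded) bugs the suite detected

detection_rate = seeded_found / seeded_bugs            # 0.75
estimated_real_total = real_found / detection_rate     # ~40
estimated_remaining = estimated_real_total - real_found

print(f"detection rate: {detection_rate:.0%}")
print(f"estimated real bugs remaining: {estimated_remaining:.0f}")
```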
CLASSIFICATION
Software Defects/ Bugs are normally classified as per:

Severity / Impact

Probability / Visibility

Priority / Urgency

Related Dimension of Quality

Related Module / Component

Phase Detected

Phase Injected

Causes of Defects
If someone makes an error or mistake in using the software, this may lead directly to a problem: the software is used incorrectly and so does not behave as expected.
People also design and build the software, and they can make mistakes during design and build. These mistakes introduce flaws (defects) into the software.
Not all defects result in failures; a defect causes a failure only when the defective code is executed under conditions that expose it.

What is Quality? Explain Quality Viewpoints.


Projects aim to deliver software to specification. For the project to deliver what the customer needs, the specification must be correct. Additionally, the delivered system must meet the specification. These two concerns are known as:
validation ('is this the right specification?')
verification ('is the system correct to specification?')

Explain fundamental principles in testing?


A number of testing principles have been suggested over the past 40 years and offer
general guidelines common for all testing.
Principle 1 - Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no
defects. Testing reduces the probability of undiscovered defects remaining in the
software but, even if no defects are found, it is not a proof of correctness.

Principle 2 - Exhaustive testing is impossible


Testing everything (all combinations of inputs and preconditions) is not feasible
except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should
be used to focus testing efforts.
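A rough back-of-the-envelope calculation shows why; the field sizes and execution rate below are assumptions chosen purely for illustration:

```python
# Why exhaustive testing is infeasible: count the input combinations
# for a tiny form with three independent fields (sizes are illustrative).

age_values = 130            # ages 0..129
name_values = 26 ** 10      # 10-letter names using a..z only
country_values = 200        # country codes

combinations = age_values * name_values * country_values
tests_per_second = 1_000    # optimistic automated execution rate

years = combinations / tests_per_second / (60 * 60 * 24 * 365)
print(f"{combinations:.3e} combinations -> about {years:.1e} years of testing")
```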
Principle 3 - Early testing
To find defects early, testing activities shall be started as early as possible in the
software or system development life cycle, and shall be focused on defined objectives.
Principle 4 - Defect clustering
Testing effort shall be focused proportionally to the expected and later observed
defect density of modules. A small number of modules usually contains most of the
defects discovered during pre-release testing, or is responsible for most of the
operational failures.
Principle 5 - Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test
cases will no longer find any new defects. To overcome this "pesticide paradox", test
cases need to be regularly reviewed and revised, and new and different tests need to
be written to exercise different parts of the software or system to find potentially more
defects.
Principle 6 - Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software
is tested differently from an e-commerce site.
Principle 7 - Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not
fulfill the users' needs and expectations.
Discuss Objectives of Software Testing? OR Explain Software Testing.
The definition starts with a description of testing as a process and then lists some objectives of the test process:
Process Testing is a process rather than a single activity - there are a series of
activities involved.
All life cycle activities If we can find and fix requirements defects at the
requirements stage, that must make commercial sense. We'll build the right software,
correctly and at a lower cost overall. So, the thought process of designing tests early in
the life cycle can help to prevent defects from being introduced into code. We
sometimes refer to this as 'verifying the test basis via the test design'. The test basis
includes documents such as the requirements and design specifications.
Both static and dynamic As well as tests in which the software code is executed to demonstrate the results of running tests (often called dynamic testing), we can also test and find defects without executing code; this is called static testing.
Planning Activities take place before and after test execution. We need to manage
the testing; for example, we plan what we want to do; we control the test activities; we
report on testing progress and the status of the software under test; and we finalize or
close testing when a phase completes.
Preparation We need to choose what testing we'll do, by selecting test conditions
and designing test cases.

Evaluation As well as executing the tests, we must check the results and evaluate
the software under test and the completion criteria, which help us decide whether we
have finished testing and whether the software product has passed the tests.
Software products and related work products We don't just test code. We test
the requirements and design specifications, and we test related documents such as
operation, user and training material.

Define the Following Terms:


1. Bug A software bug is an error, flaw, failure, or fault in a computer program or system
that produces an incorrect or unexpected result, or causes it to behave in unintended
ways. Most bugs arise from mistakes and errors made by people in either a program's
source code or its design, and a few are caused by compilers producing incorrect code.
A program that contains a large number of bugs, and/or bugs that seriously interfere
with its functionality, is said to be buggy.
2. Defect A software defect is a deficiency in a software product that causes it to perform unexpectedly. From a software user's perspective, a defect is anything that causes the software not to meet their expectations. In this context, a software user can be either a person or another piece of software.
3. Error An error resulting from bad code in some program involved in producing an erroneous result; more generally, a human action that produces an incorrect result.

4. Failure Failure is the state or condition of not meeting a desirable or intended objective, and may be viewed as the opposite of success.
5. Fault An accidental condition that causes a functional unit to fail to perform its required function.
6. Mistake An error or fault resulting from defective judgment, deficient knowledge, or carelessness.
7. Quality A measure of excellence or a state of being free from defects, deficiencies and significant variations. It is brought about by a strict and consistent commitment to certain standards that achieve uniformity of a product in order to satisfy specific customer or user requirements.
8. Risk Risk is the potential of loss (an undesirable outcome, however not necessarily so) resulting from a given action, activity and/or inaction. The notion implies that a choice having an influence on the outcome sometimes exists (or existed). Potential losses themselves may also be called "risks".
Explain psychology of software testing.
Psychology of Testing:
1) People make mistakes, but they do not like to admit them. One goal of testing software is to uncover discrepancies between the software and its specification or customer needs; therefore every failure found must be reported to the developer.
2) Can a developer test his own program? This is an important and frequently asked question. A universally valid answer does not exist: if the tester is also the author of the program, they must examine their own work critically.
3) If developers implement a fundamental design error, e.g. if they misunderstand the conceptual formulation, then it is possible that they may not find it using their own tests.
4) On the other hand, it is an advantage to have a good knowledge of one's own test object: it is not necessary to learn the test object first, and therefore time is saved. Management has to decide when it is an advantage to save time, even with the disadvantage of blindness to one's own errors.
5) An independent testing team tends to increase the quality and comprehensiveness of the tests; the tester can look at the test object without bias (partiality).
6) It is not the tester's own product, so the tester brings no assumptions or misunderstandings to it, but the tester must acquire the necessary knowledge of the test object in order to create test cases, with corresponding time and cost.
7) The tester comes along with deeper testing knowledge, which the developer does not have or must first acquire.
8) It is the job of the tester to report the failures and discrepancies observed to management. The manner of reporting can contribute to the cooperation between developer and tester.
Explain fundamental test process.
The fundamental test process consists of five main activities: test planning and control; test analysis and design; test implementation and execution; evaluating exit criteria and reporting; and test closure activities. Although logically sequential, these activities may overlap or take place concurrently.

Explain V-model of Testing?

The V-model was developed to address some of the problems experienced using the traditional waterfall approach: defects were being found too late in the life cycle, because testing was not involved until the end of the project; this late involvement of testing also added lead time.
The V-model provides guidance that testing needs to begin as early as possible in the life cycle.
The V-model shows that testing is not only an execution-based activity; there is a variety of activities that need to be performed before the end of the coding phase.
These activities should be carried out in parallel with development activities, and testers need to work with developers so they can perform these activities and produce good work products.
Therefore, in the V-model the software tester is involved in the development team from day one.
The V-model uses four test levels, each with its own objectives:
1. Component testing is a method where each component in an application is tested separately. Suppose an application has 5 components; testing each of the 5 components separately and efficiently is called component testing. Component testing is also known as module or program testing, and is done by the tester.
2. Acceptance testing is basically done by the user or customer, although other stakeholders may be involved as well. The goal of acceptance testing is to establish confidence in the system. Acceptance testing is most often focused on validation-type testing.
3. Integration testing: testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, or client and server applications on a network. This type of testing is especially relevant to client/server and distributed systems.
4. System testing: the entire system is tested as per the requirements. It is black-box type testing that is based on the overall requirements specification and covers all combined parts of the system.
What is Test Strategy in Software Testing?
The choice of test approaches or test strategy is one of the most powerful factors in the success of the test effort and the accuracy of the test plans and estimates. This factor is under the control of the testers and test leaders.
Let's survey the major types of test strategies that are commonly found:

Analytical: The risk-based strategy involves performing a risk analysis using project documents and stakeholder input, then planning, estimating, designing, and prioritizing the tests based on risk. Another analytical test strategy is the requirements-based strategy, where an analysis of the requirements specification forms the basis for planning, estimating and designing tests. Analytical test strategies have in common the use of some formal or informal analytical technique, usually during the requirements and design stages of the project.

Model-based: You can build mathematical models for loading and response for e-commerce servers, and test based on that model. If the behavior of the system under test conforms to that predicted by the model, the system is deemed to be working. Model-based test strategies have in common the creation or selection of some formal or informal model for critical system behaviors, usually during the requirements and design stages of the project.

Methodical: You might have a checklist that you have put together over the years that suggests the major areas of testing to run, or you might follow an industry standard for software quality, such as ISO 9126, for your outline of major test areas. You then methodically design, implement and execute tests following this outline. Methodical test strategies have in common the adherence to a pre-planned, systematized approach that has been developed in-house, assembled from various concepts developed in-house and gathered from outside, or adapted significantly from outside ideas, and may have an early or late point of involvement for testing.

Process- or standard-compliant: Let us take an example to understand this. You might adopt the IEEE 829 standard for your testing, using books such as [Craig, 2002] or [Drabick, 2004] to fill in the methodological gaps.

Dynamic: Let us take an example to understand this. You might create a lightweight set of testing guidelines that focus on rapid adaptation or known weaknesses in software. Dynamic strategies, such as exploratory testing, focus on finding as many defects as possible during test execution and on adapting to the system under test as it is delivered; they typically emphasize the later stages of testing.

Consultative or directed: Let us take an example to understand this. You might ask the users or developers of the system to tell you what to test, or even rely on
them to do the testing. Consultative or directed strategies have in common the
reliance on a group of non-testers to guide or perform the testing effort and
typically emphasize the later stages of testing simply due to the lack of
recognition of the value of early testing.

Regression-averse: Let us take an example to understand this. You might try to automate all the tests of system functionality so that, whenever anything changes, you can re-run every test to ensure nothing has broken. Regression-averse strategies have in common a set of procedures, usually automated, that allow them to detect regression defects. A regression-averse strategy may involve automating functional tests prior to release of the function, in which case it requires early testing; but sometimes the testing is almost entirely focused on testing functions that already have been released, which is in some sense a form of post-release test involvement.
What is Extreme Programming and what are its characteristics? OR Explain AGILE methodology?
1. Extreme programming (XP) is currently one of the most well-known agile development life cycle models.
2. The methodology claims to be more human-friendly than traditional development methods.
3. Using an agile model, developers can develop a simple and interesting GUI for the software.
Some of the characteristics of XP are:
It promotes the generation of business stories to define functionality.
It demands an on-site customer for continuous feedback and to define and carry out functional tests.
It promotes pair programming and shared code ownership among developers.
It states that component test scripts shall be written before the code is written, and that those tests should be automated.
It states that integration and testing of the code shall happen several times a day.
It states that we should always implement the simplest solution that meets today's problem.
With XP there are numerous iterations, each requiring testing.
XP developers write every test case they can think of and automate them.
Every time a change is made in the code, it is component tested and then integrated with the existing code, as sketched below.
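A minimal sketch of the test-first practice in Python's unittest follows; the fare function and its business story are invented for illustration, not part of XP itself:

```python
import unittest

# XP style: the component test is written before the code it exercises
# and is automated, so it can run on every integration, several times a day.

def fare(distance_km: float) -> float:
    """Hypothetical business story: flat fee of 2.00 plus 0.50 per km."""
    return 2.00 + 0.50 * distance_km

class FareStoryTest(unittest.TestCase):
    def test_flat_fee_for_zero_distance(self):
        self.assertEqual(fare(0), 2.00)

    def test_fee_grows_with_distance(self):
        self.assertEqual(fare(10), 7.00)

if __name__ == "__main__":
    unittest.main()
```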


What are the different levels of testing?
Unit Testing
This type of testing is performed by developers before the setup is handed over to the testing team to formally execute the test cases.
Unit testing is performed by the respective developers on the individual units of source code in their assigned areas.
The developers use test data that is different from the test data of the quality assurance team.
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct in terms of requirements and functionality.

Limitations of Unit Testing
Testing cannot catch each and every bug in an application.
It is impossible to evaluate every execution path in every software application. The same is the case with unit testing.
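For illustration, a minimal unit test in Python's unittest; the function under test is invented, and each test isolates just that one unit:

```python
import unittest

# The unit under test: one small, isolated function (invented for illustration).
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_divisible_by_4(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```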

Integration Testing
Integration testing is defined as the testing of combined parts of an application to determine if they function correctly. Integration testing can be done in two ways: bottom-up integration testing and top-down integration testing.

System Testing
System testing tests the system as a whole. Once all the components are integrated, the application as a whole is tested rigorously to see that it meets the specified quality standards.
This type of testing is performed by a specialized testing team.
System testing is important because of the following reasons:
System testing is the first level of testing where the application is tested as a whole.
The application is tested thoroughly to verify that it meets the functional and technical specifications.

Acceptance Testing
This is arguably the most important type of testing, as it is conducted by the Quality Assurance team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements.
The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.
More ideas will be shared about the application, and more tests can be performed on it to gauge its accuracy and the reasons why the project was initiated.
Acceptance tests are intended not only to point out simple spelling mistakes, cosmetic errors, or interface gaps, but also to point out any bugs in the application that will result in system crashes or major errors in the application.
Explain Triggers for Maintenance Testing?
o As stated maintenance testing is done on an existing operational system.
o It is triggered by modifications, migration, or retirement of the system.
o Modifications include planned enhancement changes (e.g. release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, or patches to newly exposed or discovered vulnerabilities of the operating system.
o Maintenance testing for migration (e.g. from one platform to another)
should include operational testing of the new environment, as well as the
changed software.
o Maintenance testing for the retirement of a system may include the testing
of data migration or archiving, if long data-retention periods are required.
o Since modifications are most often the main part of maintenance testing
for most organizations, this will be discussed in more detail.

o From the point of view of testing, there are two types of modifications.
There are modifications in which testing may be planned, and there
are ad-hoc corrective modifications, which cannot be planned at all.
Planned modifications
o The following types of planned modification may be identified:
perfective modifications (adapting software to the user's wishes, for instance by
supplying new functions or enhancing performance);
adaptive modifications (adapting software to environmental changes such as
new hardware, new systems software or new legislation);
corrective planned modifications (deferrable correction of defects).
o Ad-hoc corrective modifications are concerned with defects requiring an
immediate solution, e.g. a production run which dumps late at night, a
network that goes down with a few hundred users on line, a mailing with
incorrect addresses.
o There are different rules and different procedures for solving problems of
this kind.
o It will be impossible to take the steps required for a structured approach to
testing.
o If, however, a number of activities are carried out prior to a possible malfunction, it may be possible to achieve a situation in which reliable tests can be executed in spite of 'panic stations' all round.
o Even in the event of ad-hoc modifications, it is therefore possible to bring
about an improvement in quality by adopting a specific test approach.
Explain Maintenance Testing?
Maintenance testing:
1) Once deployed, a system is often in service for years or even decades. During this time the system and its operational environment are often corrected, changed or extended. Testing that is executed during this life cycle period is called maintenance testing.
2) Maintenance testing is different from maintainability testing, which defines how easy it is to maintain the system.
3) The development and test process applicable to new developments does not change fundamentally for maintenance purposes; the same test process steps apply and, depending on the size and risk of the changes made, several levels of testing are carried out during maintenance testing: a component test, a system test, an acceptance test.
4) A maintenance test process usually begins with the receipt of an application for a change. The test manager will use this as the basis for producing a test plan. On receipt of the new or changed specification, corresponding test cases are specified or adapted. Once the necessary changes have been made, regression testing is performed.
Usually maintenance testing will consist of two parts:
i) Testing the changes.
ii) Regression tests to show that the rest of the system has not been affected by the maintenance work.
Maintenance testing is performed on software under the following conditions:
A) If the customer or end user requires support, or they do not understand some of the functionality of the software.
B) When the developer wants to enhance/upgrade the software.
C) When any changes are requested by the user.

Alpha Testing
This test is the first stage of testing and is performed amongst the teams (developer and QA teams). Unit testing, integration testing and system testing, when combined together, are known as alpha testing. During this phase, the following aspects will be tested in the application:
Spelling mistakes
Broken links
Cloudy (unclear) directions
The application will be tested on machines with the lowest specification to test loading times and any latency problems.

Beta Testing
This test is performed after alpha testing has been successfully performed. In beta
testing, a sample of the intended audience tests the application. Beta testing is also
known as pre-release testing. Beta test versions of software are ideally distributed to
a wide audience on the Web, partly to give the program a "real-world" test and partly to
provide a preview of the next release. In this phase, the audience will be testing the
following:

Users will install, run the application and send their feedback to the project team.

Typographical errors, confusing application flow, and even crashes.

Using this feedback, the project team can fix the problems before releasing the software to the actual users.

The more issues you fix that solve real user problems, the higher the quality of
your application will be.

Having a higher-quality application when you release it to the general public will
increase customer satisfaction.

Bug or defect life cycle includes the following steps or statuses:


1. New: When a defect is logged and posted for the first time. Its state is given as new.
2. Assigned: After the tester has posted the bug, the lead of the tester approves that the bug is genuine and assigns the bug to the corresponding developer and developer team. Its state is given as assigned.
3. Open: At this state the developer has started analyzing and working on the defect fix.

4. Fixed: When developer makes necessary code changes and verifies the changes then
he/she can make bug status as Fixed and the bug is passed to testing team.
5. Pending retest: After fixing the defect, the developer gives that particular code for retesting to the tester. Here the testing is pending on the tester's end; hence its status is pending retest.
6. Retest: At this stage the tester retests the changed code which the developer has given to him, to check whether the defect is fixed or not.
7. Verified: The tester tests the bug again after it got fixed by the developer. If the bug is
not present in the software, he approves that the bug is fixed and changes the status to
verified.
8. Reopen: If the bug still exists even after the bug is fixed by the developer, the tester
changes the status to reopened. The bug goes through the life cycle once again.
9. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no
longer exists in the software, he changes the status of the bug to closed. This state
means that the bug is fixed, tested and approved.
10. Duplicate: If the bug is repeated twice or the two bugs mention the same concept of the
bug, then one bug status is changed to duplicate.
11. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the
state of the bug is changed to rejected.
12. Deferred: The bug, when changed to the deferred state, is expected to be fixed in an upcoming release. There are many factors behind changing a bug to this state; some of them are: the priority of the bug may be low, lack of time before the release, or the bug may not have a major effect on the software.
13. Not a bug: The state is given as "Not a bug" if there is no change in the functionality of the application. For example: if the customer asks for some change in the look and feel of the application, like a change of colour of some text, then it is not a bug but just some change in the looks of the application.
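One way to picture this life cycle is as a state machine. The sketch below models the statuses listed above; the set of allowed transitions is a simplified assumption, as real bug trackers differ:

```python
# Sketch of the defect life cycle as a state machine. The states come from
# the list above; the allowed transitions are a simplified (assumed) subset.

ALLOWED = {
    "New":            {"Assigned", "Rejected", "Duplicate", "Not a bug"},
    "Assigned":       {"Open", "Deferred"},
    "Open":           {"Fixed"},
    "Fixed":          {"Pending retest"},
    "Pending retest": {"Retest"},
    "Retest":         {"Verified", "Reopen"},
    "Reopen":         {"Assigned"},
    "Verified":       {"Closed"},
}

def move(bug: dict, new_status: str) -> None:
    """Change a bug's status, refusing transitions the process forbids."""
    if new_status not in ALLOWED.get(bug["status"], set()):
        raise ValueError(f'{bug["status"]} -> {new_status} not allowed')
    bug["status"] = new_status

bug = {"id": 101, "status": "New"}
for step in ("Assigned", "Open", "Fixed", "Pending retest",
             "Retest", "Verified", "Closed"):
    move(bug, step)
print(bug)   # {'id': 101, 'status': 'Closed'}
```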

Explain Phases of Formal Reviews?
1) PLANNING
1. The review process for a particular document begins with a request for review by the author to the moderator.
2. The moderator is assigned to take care of the scheduling of the review; that means the time, date, venue, agenda and invitations for the review are arranged by the moderator.
3. On a project level, the project planning needs to allow time for review and rework activities, providing engineers with time to thoroughly participate in reviews.

2) KICK-OFF
i) An optional step in the review process is a kick-off meeting. The goal of this meeting is to get everybody on the same wavelength regarding the document under review and to commit to the time that will be spent on checking.
ii) Also, the result of the entry check and the exit criteria are discussed in the case of a more formal review.
iii) Role assignments, the checking rate, the pages to be checked, process changes and possible other questions regarding the formal review are also discussed during this meeting.
3) PREPARATION
i) The participants work individually on the document under review using the related documents, procedures, rules and checklists provided. The individual participants identify defects according to their understanding of the document and their role.
ii) All issues are recorded, preferably using a logging form. Spelling mistakes are recorded on the document under review but not mentioned during the review meeting.

4) REVIEW MEETING
The review meeting typically consists of a logging phase, a discussion phase and a decision phase. During the logging phase, the issues, e.g. defects, that were identified during the preparation phase are mentioned page by page, reviewer by reviewer, and are logged either by the author or by a recorder.
5) REWORK
1. Based on the defects detected, the author will improve the document under review step by step.
2. Not every defect that is found leads to rework. It is the author's responsibility to judge whether a defect has to be fixed.
3. Only if the number of defects found per page exceeds the exit criteria does the document have to be reworked by the author.

6) FOLLOW-UP
The moderator is responsible for ensuring that satisfactory actions have been taken on all logged defects, process improvement suggestions and change requests.
Although the moderator checks to make sure that the author has taken action on all defects, it is not necessary for the moderator to check all the corrections in detail. For more formal review types, the moderator checks for compliance with the exit criteria.
Explain the Different Types of Reviews?
1. Walkthrough:

It is not a formal process.
It is led by the author.
The author guides the participants through the document according to his or her thought process, to achieve a common understanding and to gather feedback.
It is useful for people who are not from the software discipline and who are not used to, or cannot easily understand, the software development process.
It is especially useful for higher-level documents, like the requirement specification, etc.

The goals of a walkthrough:
To present the document both within and outside the software discipline in order to gather information regarding the topic under documentation.
To explain (do knowledge transfer) and evaluate the contents of the document.
To achieve a common understanding and to gather feedback.
To examine and discuss the validity of the proposed solutions.
2. Technical review:

It is a less formal review.
It is led by a trained moderator, but can also be led by a technical expert.

It is often performed as a peer review without management participation

Defects are found by the experts (such as architects, designers, key users) who
focus on the content of the document.

In practice, technical reviews vary from quite informal to very formal

The goals of the technical review are:
To ensure at an early stage that the technical concepts are used correctly.
To assess the value of the technical concepts and alternatives in the product.
To have consistency in the use and representation of the technical concepts.
To inform participants about the technical content of the document.
3. Inspection:

It is the most formal review type

It is led by trained moderators.

During inspection the documents are prepared and checked thoroughly by the
reviewers before the meeting

It involves peers to examine the product

A separate preparation is carried out during which the product is examined and
the defects are found

The defects found are documented in a logging list or issue log

A formal follow-up is carried out by the moderator applying exit criteria

The goals of inspection are:
To help the author improve the quality of the document under inspection.
To remove defects efficiently and as early as possible.
To improve product quality.
To create a common understanding by exchanging information.
To learn from defects found and prevent the occurrence of similar defects.

What is Static Testing?


Static testing is the testing of the software work products manually, or with a set
of tools, but they are not executed.
It starts early in the Life cycle and so it is done during the verification process.
It does not need a computer, as the testing of the program is done without executing the program. For example: reviewing, walkthrough, inspection, etc.
The uses of static testing are as follows:
Since static testing can start early in the life cycle, early feedback on quality issues can be established.
As the defects are detected at an early stage, the rework cost is most often relatively low.
Development productivity is likely to increase because of the reduced rework effort.
Types of defects that are easier to find during static testing are: deviations from standards, missing requirements, design defects, non-maintainable code and inconsistent interface specifications.
Static tests contribute to an increased awareness of quality issues.
Explain the Success Factors of Reviews?

Find a 'champion'
A champion is needed, one who will lead the process on a project or organizational level. They need expertise, enthusiasm and a practical mindset in order to guide
moderators and participants.
The authority of this champion should be clear to the entire organization.
Management support is also essential for success.
Pick things that really count
Select the documents for review that are most important in a project. Reviewing
highly critical, upstream documents like requirements and architecture will most
certainly show the benefits of the review process to the project.
These invested review hours will have a clear and high return on investment.
In addition make sure each review has a clear objective and the correct type of
review is selected that matches the defined objective.
Explicitly plan and track review activities
To ensure that reviews become part of the day-to-day activities, the hours to be
spent should be made visible within each project plan.
The engineers involved are prompted to schedule time for preparation and,
very importantly, rework.
Train participants
It is important that training is provided in review techniques, especially the more
formal techniques, such as inspection.
Otherwise the process is likely to be impeded by those who don't understand the
process and the reasoning behind it.
Manage people issues
Reviews are about evaluating someone's document.
Some reviews tend to get too personal when they are not well managed by the
moderator.
Follow the rules but keep it simple
Follow all the formal rules until you know why and how to modify them, but make
the process only as formal as the project culture or maturity level allows.
Do not become too theoretical or too detailed.
Continuously improve process and tools
Continuous improvement of process and supporting tools (e.g. checklists),based
upon the ideas of participants, ensures the motivation of the engineers involved.
Motivation is the key to a successful change process.
Report results
Report quantified results and benefits to all those involved as soon as possible,
and discuss the consequences of defects if they had not been found this early.

Costs should of course be tracked, but benefits, especially when problems don't
occur in the future, should be made visible by quantifying the benefits as well as
the costs.
Just do it!
The process is simple but not easy.
Each step of the process is clear, but experience is needed to execute them correctly.

Explain Test Types and Target of Testing?


Black-box testing is a method of software testing that examines the functionality of an
application without peering into its internal structures or workings. This method of test
can be applied to virtually every level of software
testing: unit, integration, system and acceptance. It typically comprises most if not all
higher level testing, but can also dominate unit testing as well.
White-box testing (also known as clear box testing, glass box testing, transparent
box testing, and structural testing) is a method of testing software that tests internal
structures or workings of an application, as opposed to its functionality (i.e. black-box
testing). In white-box testing an internal perspective of the system, as well as
programming skills, are used to design test cases.
Unit testing is a software development process in which the smallest testable parts of an
application, called units, are individually and independently scrutinized for proper
operation. Unit testing is often automated but it can also be done manually. This testing
mode is a component of Extreme Programming (XP), a pragmatic method of software
development that takes a meticulous approach to building a product by means of
continual testing and revision.
Integration testing tests the integration or interfaces between components, and interactions with different parts of the system, such as the operating system, file system and hardware, or interfaces between systems. Also, after integrating two different components together we do integration testing. For example, when two different modules, Module A and Module B, are integrated, integration testing is done, as sketched below.
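A minimal sketch of such an integration test in Python's unittest; "Module A" and "Module B" are invented stand-ins, and the test exercises the interface between them rather than either unit alone:

```python
import unittest

# Two hypothetical modules: Module A formats a record, Module B stores it.
def format_record(name: str, amount: float) -> str:      # "Module A"
    return f"{name.strip()},{amount:.2f}"

class Store:                                              # "Module B"
    def __init__(self):
        self.rows = []
    def save(self, line: str) -> None:
        name, amount = line.split(",")
        self.rows.append((name, float(amount)))

class ModuleAPlusBIntegrationTest(unittest.TestCase):
    """Exercises the interface between A and B, not each unit alone."""
    def test_formatted_record_round_trips_through_store(self):
        store = Store()
        store.save(format_record("  alice ", 12.5))
        self.assertEqual(store.rows, [("alice", 12.50)])

if __name__ == "__main__":
    unittest.main()
```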
Functional testing is a software testing process used within software development in
which software is tested to ensure that it conforms with all requirements. Functional
testing is a way of checking software to ensure that it has all the required functionality
that's specified within its functional requirements.
System testing is most often the final test to verify that the system to be delivered meets the specification and serves its purpose. System testing is carried out by specialist testers or independent testers. System testing should investigate both the functional and non-functional requirements of the system.
Target Testing:Regression testing is the process of testing changes to computer programs to make
sure that the older programming still works with the new changes. Regression testing is
a normal part of the program development process and, in larger companies, is done by
code testing specialists. Test department coders develop code test scenarios and
exercises that will test new units of code after they have been written. These test cases
form what becomes the test bucket.
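A minimal sketch of a regression test; the function and the bug number are hypothetical, and the point is that a test pinning a previously fixed defect is re-run after every change:

```python
import unittest

def total_price(prices):
    # An earlier (hypothetical) version crashed on an empty basket;
    # sum() handles the empty case and returns 0.
    return sum(prices)

class RegressionSuite(unittest.TestCase):
    """Re-run after every change to show old behaviour still works."""

    def test_regression_empty_basket(self):
        # Pins a previously fixed defect (hypothetical bug id #42):
        # an empty basket must total 0, not raise an exception.
        self.assertEqual(total_price([]), 0)

    def test_existing_behaviour_unchanged(self):
        self.assertEqual(total_price([1.5, 2.5]), 4.0)

if __name__ == "__main__":
    unittest.main()
```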
Different Types of Testing Techniques?
There are many different types of software testing technique, each with its own
strengths and weaknesses. Each individual technique is good at finding particular types
of defect and relatively poor at finding other types.
Ex: a technique that explores the upper and lower limits of a single input range is more
likely to find boundary value defects than defects associated with combinations of
inputs. Similarly, testing performed at different stages in the software development
life cycle will find different types of defects; component testing is more likely to find
coding logic defects than system design defects.
Each testing technique falls into one of a number of different categories. Broadly speaking, there are two main categories: static and dynamic.

Dynamic techniques are subdivided into three more categories:
specification-based (black-box, also known as behavioural techniques)
structure-based (white-box or structural techniques)
experience-based

Specification-based techniques include both:
functional techniques
non-functional techniques
Static testing techniques
Static testing techniques do not execute the code being examined and are generally used before any tests are executed on the software. They could be called non-execution techniques.
However, 'static analysis' is a tool-supported type of static testing that concentrates on testing formal languages and so is most often used to statically test source code.
Specification-based (black-box) testing techniques
These are also known as 'black-box' or input/output-driven testing techniques
because they view the software as a black-box with inputs and outputs, but they
have no knowledge of how the system or component is structured inside the box.

In essence, the tester is concentrating on what the software does, not how it
does it.
Functional testing is concerned with what the system does, its features or
functions. Non-functional testing is concerned with examining how well the
system does something, rather than what it does.

Structure-based (white-box) testing techniques
Structure-based testing techniques (which are dynamic rather than static) use the internal structure of the software to derive test cases. They are commonly called 'white-box' or 'glass-box' techniques (implying you can see into the system), since they require knowledge of how the software is implemented, that is, how it works.

Ex: a structural technique may be concerned with exercising loops in the software. Different test cases may be derived to exercise the loop once, twice, and many times. This may be done regardless of the functionality of the software.
Experience-based testing techniques
In experience-based techniques, people's knowledge, skills and background
are a prime contributor to the test conditions and test cases.
The experience of both technical and business people is important, as they
bring different perspectives to the test analysis and design process.
Due to previous experience with similar systems, they may have insights into
what could go wrong, which is very useful for testing.
Where to apply the different categories of techniques
Specification-based techniques are appropriate at all levels of testing
(component testing through to acceptance testing) where a specification exists.
When performing system or acceptance testing, the requirements
specification or functional specification may form the basis of the tests.
When performing component or integration testing, a design document or low
level specification forms the basis of the tests.
Structure-based techniques can also be used at all levels of testing.
Developers use structure-based techniques in component testing and
component integration testing, especially where there is good tool support for
code coverage.
Structure-based techniques are also used in system and acceptance testing,
but the structures are different.
Ex: the coverage of menu options or major business transactions could be the
structural element in system or acceptance testing.

Explain the Following:
Equivalence Partitioning:
Equivalence partitioning (EP) is a specification-based or black-box technique.
It can be applied at any level of testing and is often a good technique to use first.
The idea behind this technique is to divide (i.e. to partition) a set of test conditions into groups or sets that can be considered the same (i.e. the system should handle them equivalently), hence 'equivalence partitioning'. Equivalence partitions are also known as equivalence classes; the two terms mean exactly the same thing.

In the equivalence-partitioning technique we need to test only one condition from each partition. This is because we are assuming that all the conditions in one partition will be treated in the same way by the software.
If one condition in a partition works, we assume all of the conditions in that partition will work, and so there is little point in testing any of the others.
Similarly, if one of the conditions in a partition does not work, then we assume that none of the conditions in that partition will work, so again there is little point in testing any more in that partition.

For example, a savings account in a bank has a different rate of interest depending on the balance in the account. In order to test the software that calculates the interest due, we can identify the ranges of balance values that earn the different rates of interest.
For example, if a 3% rate of interest is given when the balance in the account is in the range of $0 to $100, 5% when the balance is in the range of $100 up to $1000, and 7% when the balance is $1000 and above, we would initially identify three valid equivalence partitions and one invalid partition (a balance below $0).

In the above example we have identified four partitions, even though the specification mentioned only three.
This shows a very important task of the tester: a tester should not only test what is in the specification, but should also think about things that haven't been specified.
In this case we have thought of the situation where the balance is less than zero.
An inexperienced tester (let's call him Robbin) might have thought that a good set of tests would be to test every $50. That would give the following tests: $50.00, $100.00, $150.00, $200.00, $250.00, and so on up to $800.00 (by then Robbin would have got tired of it and thought that enough tests had been carried out).

But look at what Robbin has tested: only two out of four partitions! So if the system does not correctly handle a negative balance or a balance of $1000 or more, he would not have found these defects, so the naive approach is less effective than equivalence partitioning.

At the same time, Robbin has four times more tests (16 tests versus our four
tests using equivalence partitions), so he is also much less efficient. This is why
we say that using techniques such as this makes testing both more effective and
more efficient.

Note that when we say a partition is invalid, it doesn't mean that it represents a value that cannot be entered by a user or a value that the user isn't supposed to enter. It just means that it is not one of the expected inputs for this particular field.
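A minimal sketch of the savings-account example as equivalence-partition tests, with one representative value per partition; which partition owns the exact values $100 and $1000 is an assumption here, since the quoted ranges overlap at the boundaries:

```python
import unittest

# Interest function for the savings-account example above (assumed boundary
# ownership: $0..$100 -> 3%, over $100 up to $1000 -> 5%, over $1000 -> 7%).
def interest_rate(balance: float) -> float:
    if balance < 0:
        raise ValueError("negative balance is not a valid input")
    if balance <= 100:
        return 0.03
    if balance <= 1000:
        return 0.05
    return 0.07

class EquivalencePartitionTest(unittest.TestCase):
    """One representative value per partition: three valid, one invalid."""

    def test_low_partition(self):
        self.assertEqual(interest_rate(50), 0.03)

    def test_middle_partition(self):
        self.assertEqual(interest_rate(500), 0.05)

    def test_high_partition(self):
        self.assertEqual(interest_rate(5000), 0.07)

    def test_invalid_negative_partition(self):
        with self.assertRaises(ValueError):
            interest_rate(-50)

if __name__ == "__main__":
    unittest.main()
```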
Boundary Value Analysis:
o Boundary value analysis (BVA) is based on testing at the boundaries between partitions.
o Here we have both valid boundaries (in the valid partitions) and invalid boundaries (in the invalid partitions).
o As an example, consider a printer that has an input option for the number of copies to be made, from 1 to 99.
o To apply boundary value analysis, we take the minimum and maximum (boundary) values from the valid partition (1 and 99 in this case) together with the first or last value respectively in each of the invalid partitions adjacent to the valid partition (0 and 100 in this case).
o If a partition has no defined maximum (or minimum), it is called an open boundary, because one of the sides of the partition is left open, i.e. not defined.
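A minimal sketch of the printer example as boundary-value tests; accept_copies is an invented stand-in for the real input check:

```python
import unittest

# Printer example above: the valid number of copies is 1..99 (assumed spec).
def accept_copies(n: int) -> bool:
    return 1 <= n <= 99

class BoundaryValueTest(unittest.TestCase):
    """Test exactly at the boundaries and just outside them."""

    def test_boundaries(self):
        cases = [
            (0, False),    # last value of the lower invalid partition
            (1, True),     # minimum of the valid partition
            (99, True),    # maximum of the valid partition
            (100, False),  # first value of the upper invalid partition
        ]
        for value, expected in cases:
            self.assertEqual(accept_copies(value), expected)

if __name__ == "__main__":
    unittest.main()
```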

Decision Tables:
The techniques of equivalence partitioning and boundary value analysis are often applied to specific situations or inputs.
However, if different combinations of inputs result in different actions being taken, this can be more difficult to show using equivalence partitioning and boundary value analysis, which tend to be more focused on the user interface. The other two specification-based software testing techniques, decision tables and state transition testing, are more focused on business logic or business rules.
A decision table is a good way to deal with combinations of things (e.g. inputs).
This technique is sometimes also referred to as a cause-effect table.
The reason for this is that there is an associated logic diagramming technique
called cause-effect graphing which was sometimes used to help derive the
decision table (Myers describes this as a combinatorial logic network [Myers,
1979]).
However, most people find it more useful just to use the table described in
[Copeland, 2003].
Decision tables provide a systematic way of stating complex business rules,
which is useful for developers as well as for testers.
Decision tables can be used in test design whether or not they are used in
specifications, as they help testers explore the effects of combinations of different
inputs and other software states that must correctly implement business rules.
Helping the developers to do a better job can also lead to better relationships with them. Testing combinations can be a challenge, as the number of combinations can often be huge.
Testing all combinations may be impractical if not impossible. We have to be
satisfied with testing just a small subset of combinations but making the choice of
which combinations to test and which to leave out is also important.
If you do not have a systematic way of selecting combinations, an arbitrary
subset will be used and this may well result in an ineffective test effort.
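A minimal sketch of a decision table driving tests; the discount rule is invented, and each column (rule) of the table becomes one test:

```python
# Decision-table sketch for an invented business rule: a loan discount
# depends on two conditions, "existing customer" and "good credit score".
# Each column of the table becomes one rule, and each rule one test case.

RULES = [
    # (existing_customer, good_score) -> discount_percent
    ((True,  True),  5.0),
    ((True,  False), 2.0),
    ((False, True),  1.0),
    ((False, False), 0.0),
]

def discount(existing_customer: bool, good_score: bool) -> float:
    for conditions, action in RULES:
        if conditions == (existing_customer, good_score):
            return action
    raise ValueError("combination not covered by the table")

# Every rule in the table is exercised exactly once:
for (existing, good), expected in RULES:
    assert discount(existing, good) == expected
print("all decision-table rules tested")
```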

Experience Based Testing or Techniques:In experience-based techniques, peoples knowledge, skills and background
are of prime importance to the test conditions and test cases.

The experience of both technical and business people is required, as they bring
different perspectives to the test analysis and design process. Because of the
previous experience with similar systems, they may have an idea as what could
go wrong, which is very useful for testing.

Experience-based techniques go together with specification-based and structurebased techniques, and are also used when there is no specification, or if the
specification is inadequate or out of date.

This may be the only type of technique used for low-risk systems, but this
approach may be particularly useful under extreme time pressure in fact this is
one of the factors leading to exploratory testing.

Error Guessing
Error guessing is a technique where experienced and good testers are encouraged to think of situations in which the software may not be able to cope.

Some people seem to be naturally good at testing and others are good testers
because they have a lot of experience either as a tester or working with a
particular system and so are able to find out its weaknesses.

This is why an error-guessing approach, used after more formal techniques have been applied to some extent, can be very effective.
It also saves a lot of time, because the assumptions and guesses made by experienced testers find defects which otherwise would not be found.
The success of error guessing is very much dependent on the skill of the tester, as good testers know where the defects are most likely to be.
In using the more formal techniques first, the tester is likely to gain a better understanding of the system, what it does and how it works.

With this better understanding, he or she is likely to be better at guessing ways in which the system may not work properly.
Exploratory testing
Exploratory testing is a hands-on approach in which testers are involved in
minimum planning and maximum test execution. The planning involves
the creation of a test charter, a short declaration of the scope of a short (1
to 2 hour) time-boxed test effort, the objectives and possible approaches
to be used.
The test design and test execution activities are performed in parallel
typically without formally documenting the test conditions, test cases or
test scripts.
This does not mean that other, more formal testing techniques will not be
used.
For example, the tester may decide to use boundary value analysis but
will think through and test the most important boundary values without
necessarily writing them down.
Some notes will be written during the exploratory-testing session, so that a
report can be produced afterwards.
Test logging is undertaken as test execution is performed, documenting
the key aspects of what is tested, any defects found and any thoughts
about possible further testing.

A key aspect of exploratory testing is learning: learning by the tester about the software, its use, its strengths and its weaknesses.
The tester is constantly making decisions about what to test next and
where to spend the (limited) time.

State Transition Testing:
State transition testing is used where some aspect of the system can be described in what is called a 'finite state machine'. This simply means that the system can be in a (finite) number of different states, and the transitions from one state to another are determined by the rules of the 'machine'. This is the model on which the system and the tests are based.
Any system where you get a different output for the same input, depending on what has happened before, is a finite state system.
A finite state system is often shown as a state diagram.
One of the advantages of the state transition technique is that the model can be
as detailed or as abstract as you need it to be.
Where a part of the system is more important (that is, requires more testing) a
greater depth of detail can be modeled.
Where the system is less important (requires less testing), the model can use a
single state to signify what would otherwise be a series of different states.
A state transition model has four basic parts:
The states that the software may occupy (open/closed or funded/insufficient
funds);
The transitions from one state to another (not all transitions are allowed);
The events that cause a transition (closing a file or withdrawing money);
The actions that result from a transition (an error message or being given your
cash).
Hence we can see that in any given state, one event can cause only one action,
but that the same event from a different state may cause a different action
and a different end state.

In deriving test cases, we may start with a typical scenario.

The first test case here would be the normal situation, where the correct PIN is entered the first time.
A second test (to visit every state) would be to enter an incorrect PIN each time, so that the system eats the card.
A third test would be one where the PIN is incorrect the first time but OK the second time, and another test where the PIN is correct on the third try. These tests are probably less important than the first two.
Note that a transition does not need to change to a different state, so there could be a transition from 'access account' which just goes back to 'access account' for an action such as 'request balance'. A sketch of this state machine follows.
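This is a minimal sketch of the PIN scenario as a table-driven finite state machine; the state and event names are assumptions based on the scenario text:

```python
# Sketch of the ATM/PIN example above as a finite state machine.
# States and events are simplified assumptions drawn from the scenario.

TRANSITIONS = {
    ("wait_pin_1", "correct_pin"): "access_account",
    ("wait_pin_1", "wrong_pin"):   "wait_pin_2",
    ("wait_pin_2", "correct_pin"): "access_account",
    ("wait_pin_2", "wrong_pin"):   "wait_pin_3",
    ("wait_pin_3", "correct_pin"): "access_account",
    ("wait_pin_3", "wrong_pin"):   "card_eaten",
    # A transition can return to the same state:
    ("access_account", "request_balance"): "access_account",
}

def run(events, state="wait_pin_1"):
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

# Test 1: correct PIN the first time.
assert run(["correct_pin"]) == "access_account"
# Test 2: wrong PIN three times, so the machine eats the card.
assert run(["wrong_pin", "wrong_pin", "wrong_pin"]) == "card_eaten"
# Test 3: wrong once, then correct.
assert run(["wrong_pin", "correct_pin"]) == "access_account"
print("state transition tests passed")
```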
Traceability:Test conditions should be able to be linked back to their sources in the test basis,
this is known as traceability.
Traceability can be horizontal through all the test documentation for a given test
level (e.g. system testing, from test conditions through test cases to test scripts)
or it can be vertical through the layers of development documentation (e.g. from
requirements to components).
Now, the question that may arise is: why is traceability important? Let's have a look at the following examples:
The requirements for a given function or feature have changed. Some of the fields now have different ranges that can be entered.
Which tests were looking at those boundaries? They now need to be changed. How many tests will actually be affected by this change in the requirements? These questions can be answered easily if the requirements can easily be traced to the tests.

A set of tests that has run OK in the past has now started creating serious
problems. What functionality do these tests actually exercise? Traceability
between the tests and the requirement being tested enables the functions or
features affected to be identified more easily.

Before delivering a new release, we want to know whether or not we have tested all of the specified requirements in the requirements specification. We have the list of the tests that have passed, but was every requirement tested?
Importance of traceability:
To identify the apt version of test cases to be used.
To identify which test cases can be reused or need to be updated.
To assist the debugging process, so that defects found when executing tests can be tracked back to the corresponding version of the requirement.
To ensure that a particular test case is based on one or more requirements mentioned in the requirements document.
To determine which test cases are written for which requirements, and to check whether test cases are written for every requirement.
If any dispute arises over a bug, in which the developer says it is not a bug as per the design, the test case can be traced back to the requirement.
It helps in finding out how many test cases are affected whenever there is any change in the application. A small sketch of such a traceability matrix follows.
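A minimal sketch of a requirement-to-test traceability matrix as a plain mapping; all requirement and test case ids are invented:

```python
# Sketch of a requirement-to-test traceability matrix as a plain mapping.

TRACE = {
    "REQ-01": ["TC-001", "TC-002"],
    "REQ-02": ["TC-003"],
    "REQ-03": [],                     # gap: no test written yet
}

# Which tests are affected if REQ-01 changes?
print("affected:", TRACE["REQ-01"])

# Was every requirement tested?
untested = [req for req, tests in TRACE.items() if not tests]
print("untested requirements:", untested)

# Traceability in the other direction: test -> requirements.
reverse = {}
for req, tests in TRACE.items():
    for tc in tests:
        reverse.setdefault(tc, []).append(req)
print("TC-001 covers:", reverse["TC-001"])
```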

Independent Testing: A degree of independence avoids author bias and is often more effective at finding defects and failures.
There are several levels of independence, listed here from the lowest level of independence to the highest:
i. Tests by the person who wrote the item.
ii. Tests by another person within the same team, like another programmer.
iii. Tests by a person from a different group, such as an independent test team.
iv. Tests by a person from a different organization or company, such as outsourced testing or certification by an external body.

When we think about how independent the test team is, it is really very important to understand that independence is not an either/or condition, but a range:
At one end of the range lies the absence of independence, where the
programmer performs testing within the programming team.
Moving toward independence, we find an integrated tester or group of testers
working alongside the programmers, but still within and reporting to the
development manager.
Then, moving a little more towards independence, we might find a team of testers who are independent and outside the development team, but reporting to project management.
Near the other end of the continuum lies complete independence. We might see
a separate test team reporting into the organization at a point equal to the
development or project team.
We might find specialists in the business domain (such as users of the system),
specialists in technology (such as database experts), and specialists in testing
(such as security testers, certification testers, or test automation experts) in a
separate test team,
as part of a larger independent test team, or as part of a contract, outsourced test
team.
Benefits of independent testing:
An independent tester can repeatedly find more, other, and different defects than a tester working within a programming team or a tester who is by profession a programmer.
While business analysts, marketing staff, designers, and programmers bring their own assumptions to the specification and implementation of the item under test, an independent tester brings a different set of assumptions to testing and to reviews, which often helps in exposing the hidden defects and problems.
An independent tester who reports to senior management can report his results honestly and without any concern for reprisal that might result from pointing out problems in coworkers' or, worse yet, the manager's work.
An independent test team often has a separate budget, which helps ensure the proper level of money is spent on tester training, testing tools, test equipment, etc.
In addition, in some organizations, testers in an independent test team may find it easier to have a career path that leads up into more senior roles in testing.
Risks of independent and integrated testing:
There is a possibility that the testers and the test team can get isolated. This can take the form of interpersonal isolation from the programmers, the designers, and the project team itself, or it can take the form of isolation from the broader view of quality and the business objectives (e.g., obsessive focus on defects, often accompanied by a refusal to accept business prioritization of defects).
This leads to communication problems, feelings of unfriendliness and hostility, lack of identification with and support for the project goals, spontaneous blame festivals and political backstabbing.
Even well-integrated test teams can suffer problems. Other project stakeholders might come to see the independent test team, rightly or wrongly, as a bottleneck and a source of delay.
Some programmers give up their responsibility for quality, saying, 'Well, we have this test team now, so why do I need to unit test my code?'
Test Strategies: The choice of test approaches or strategies is one powerful factor in the success of the test effort and the accuracy of the test plans and estimates. This factor is under the control of the testers and test leaders.
Analytical
An example of an analytical test strategy is the requirements-based strategy, where an analysis of the requirements specification forms the basis for planning, estimating and designing tests. Analytical test strategies have in common the use of some formal or informal analytical technique, usually during the requirements and design stages of the project.
Model-based
Model-based test strategies have in common the creation or selection of some formal or informal model for critical system behaviors, usually during the requirements and design stages of the project.
Methodical
Methodical test strategies have in common the adherence to a pre-planned, systematized approach that has been developed in-house, assembled from various concepts developed in-house and gathered from outside, or adapted significantly from outside ideas, and may have an early or late point of involvement for testing.
Process or standard compliant
Process- or standard-compliant strategies have in common reliance upon an externally developed approach to testing, often with little if any customization, and may have an early or late point of involvement for testing.
Dynamic
Dynamic strategies, such as exploratory testing, have in common concentrating on finding as many defects as possible during test execution and adapting to the realities of the system under test as it is when delivered; they typically emphasize the later stages of testing.
Consultative or directed
Consultative or directed strategies have in common the reliance on a group of non-testers to guide or perform the testing effort, and typically emphasize the later stages of testing, simply due to the lack of recognition of the value of early testing.
How do you know which strategies to pick or blend for the best chance of success? There are many factors to consider, but let us highlight a few of the most important:
Risk
Risk management is very important during testing, so consider the risks and the level of risk. For a well-established application that is evolving slowly, regression is an important risk, so regression-averse strategies make sense.
Skill
Consider which skills your testers possess and lack, because strategies must not only be chosen, they must also be executed. A standard-compliant strategy is a smart choice when you lack the time and skills in your team to create your own approach.
Objectives
Testing must satisfy the needs and requirements of stakeholders to be successful. If the objective is to find as many defects as possible with a minimal amount of up-front time and effort invested, a dynamic strategy makes sense.
Regulations
Sometimes you must satisfy not only stakeholders, but also regulators. In this case, you may need to plan a methodical test strategy that satisfies these regulators that you have met all their requirements.
Product
Some products, such as weapons systems and contract-development software, tend to have well-specified requirements. This leads to synergy with a requirements-based analytical strategy.
Business
Business considerations and business continuity are often important. If you can use a legacy system as a model for a new system, you can use a model-based strategy.
What is Incident logging? Or How to log an incident in software testing?
When we talk about incidents we mean to indicate the possibility that a questionable behavior is not necessarily a true defect.
We log these incidents so that we can keep a record of what we observed, and can follow up the incident and track what is done to correct it.
It is most common to find incident logging or defect reporting processes and tools in use during formal, independent test phases.
But it is a good idea to log, report, track, and manage incidents found during development and reviews as well, because it gives useful information about the early and cheaper defect detection and removal activities.
What is an incident report? Or why report the incident?
After logging the incidents that occur in the field or after deployment of the system, we also need some way of reporting, tracking, and managing them. It is most common to find defects reported against the code or the system itself. However, there are cases where defects are reported against requirements and design specifications, user and operator guides, and tests as well.
Why report the incidents?
There are many benefits of reporting the incidents, as given below:
In some projects, a very large number of defects are found. Even on smaller projects where 100 or fewer defects are found, it is very difficult to keep track of all of them unless you have a process for reporting, classifying, assigning and managing the defects from discovery to final resolution.
An incident report contains a description of the misbehavior that was observed and a classification of that misbehavior.
As with any written communication, it helps to have clear goals in mind when writing. One common goal for such reports is to provide programmers, managers and others with detailed information about the behavior observed and the defect.
Another is to support the analysis of trends in aggregate defect data, either for understanding more about a particular set of problems or tests, or for understanding and reporting the overall level of system quality.
Finally, defect reports, when analyzed over a project and even across projects, give information that can lead to development and test process improvements.
The programmers need the information in the report to find and fix the defects. Before that happens, though, managers should review and prioritize the defects so that scarce testing and developer resources are spent fixing and confirmation testing the most important defects.
While many of these incidents will be user error or some other behavior not related to a defect, some percentage of defects does escape from quality assurance and testing activities.
The defect detection percentage (DDP), which compares field defects with test defects, is an important metric of the effectiveness of the test process.
Here is an example of a DDP formula that would apply for calculating DDP for the last level of testing prior to release to the field:
DDP = (defects found during that level of testing) / (defects found during that level of testing + defects found afterwards in the field) x 100%
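As a quick worked illustration (the counts are hypothetical), the calculation is straightforward:

    # DDP for the last test level before release; counts are hypothetical.
    defects_found_in_system_test = 90
    defects_found_in_field = 10

    ddp = defects_found_in_system_test / (
        defects_found_in_system_test + defects_found_in_field) * 100
    print(f"DDP = {ddp:.1f}%")  # DDP = 90.0%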
Often, it aids the effectiveness and efficiency of reporting, tracking and managing defects when the defect-tracking tool provides an ability to vary some of the information captured depending on what the defect was reported against.
Test Controls
Test control is about guiding and corrective actions to try to achieve the best possible outcome for the project. The specific guiding actions depend on what we are trying to control. Let us take a few hypothetical examples:
A portion of the software under test will be delivered late, but market conditions dictate that we cannot change the release date. At this point test control might involve re-prioritizing the tests so that we start testing against what is available now.
For cost reasons, performance testing is normally run on weekday evenings during off-hours in the production environment. Due to unexpected high demand for your products, the company has temporarily adopted an evening shift that keeps the production environment in use 18 hours a day, five days a week. In this context test control might involve rescheduling the performance tests for the weekend.
Configuration Management
Configuration management determines clearly the items that make up the software or system. These items include source code, test scripts, third-party software, hardware, data and both development and test documentation.
Configuration management is also about making sure that these items are managed carefully, thoroughly and attentively during the entire project and product life cycle.
Configuration management has a number of important implications for testing. For example, configuration management allows the testers to manage their testware and test results using the same configuration management mechanisms.
Configuration management also supports the build process, which is important for delivery of a test release into the test environment.
Simply sending Zip archives by e-mail will not be sufficient, because there are too many opportunities for such archives to become polluted with undesirable contents or to harbor left-over previous versions of items.
Especially in later phases of testing, it is critical to have a solid, reliable way of delivering test items that work and are the proper version.
Last but not least, configuration management allows us to map what is being tested to the underlying files and components that make it up.
This is very important. For example, when we report defects, we need to report them against something, something which is version controlled. If it is not clear what we found the defect in, the programmers will have a very tough time finding the defect in order to fix it. For the kind of test reports discussed earlier to have any meaning, we must be able to trace the test results back to what exactly we tested, as the small sketch below illustrates.
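A minimal sketch of what that traceability can look like in practice, assuming a test result record that pins down the exact versions involved (all field names and values are hypothetical):

    # Hypothetical test result record that ties an outcome to exact versions,
    # so a defect report can say precisely what was tested.
    test_result = {
        "test_case": "TC-102",
        "status": "failed",
        "build_id": "2024-05-01-nightly",
        "source_revision": "a1b2c3d",    # revision of the code under test
        "testware_revision": "9f8e7d6",  # revision of the test scripts used
    }
    print(test_result["build_id"])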
Ideally, when testers receive an organized, version-controlled test release from a change-managed source code repository, it comes along with a test item transmittal report or release notes.
[IEEE 829] provides a useful guideline for what goes into such a report. Release notes are not always so formal and do not always contain all the information shown.
Configuration management is a very complex topic, so advance planning is important to make it work. During the project planning stage, and perhaps as part of your own test plan, make sure that configuration management procedures and tools are selected.
As the project proceeds, the configuration process and mechanisms must be implemented, and the key interfaces to the rest of the development process should be documented.
What is Risk based testing?
Risk based testing is basically testing done for the project based on risks. Risk based testing uses risk to prioritize and emphasize the appropriate tests during test execution.
In simple terms, risk is the probability of occurrence of an undesirable outcome. This outcome is also associated with an impact. Since there might not be sufficient time to test all functionality, risk based testing involves testing the functionality which has the highest impact and probability of failure.
Risk-based testing is the idea that we can organize our testing efforts in a way that reduces the residual level of product risk when the system is deployed.
Risk-based testing starts early in the project, identifying risks to system quality and using that knowledge of risk to guide testing planning, specification, preparation and execution.
Risk-based testing involves both mitigation testing, to provide opportunities to reduce the likelihood of defects, especially high-impact defects, and contingency testing, to identify work-arounds to make the defects that do get past us less painful.
Risk-based testing also involves measuring how well we are doing at finding and removing defects in critical areas.
Risk-based testing can also involve using risk analysis to identify proactive opportunities to remove or prevent defects through non-testing activities and to help us select which test activities to perform.
The goal of risk-based testing cannot practically be a risk-free project. What we can get from risk-based testing is to carry out the testing with best practices in risk management, to achieve a project outcome that balances risks with quality, features, budget and schedule.
How to perform risk based testing?
Make a prioritized list of risks (see the sketch below).
Perform testing that explores each risk.
As risks evaporate and new ones emerge, adjust your test effort to stay focused on the current crop.
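A minimal sketch of the first step, assuming a simple likelihood x impact scoring scheme (the risk items and the 1-5 scores are hypothetical):

    # Prioritize product risks by score = likelihood x impact; values hypothetical.
    risks = [
        {"name": "payment calculation wrong", "likelihood": 2, "impact": 5},
        {"name": "slow search on large data", "likelihood": 4, "impact": 3},
        {"name": "typo on help page", "likelihood": 3, "impact": 1},
    ]

    for r in risks:
        r["score"] = r["likelihood"] * r["impact"]

    # Test the highest-scoring risks first and most thoroughly.
    for r in sorted(risks, key=lambda r: r["score"], reverse=True):
        print(f'{r["score"]:>2}  {r["name"]}')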
Requirements management tools
Since the tests are based on requirements, the better the quality of the requirements, the easier it will be to write tests from them. It is equally important to be able to trace tests to requirements and requirements to tests.
Some requirements management tools are able to find defects in the requirements, for example by checking for ambiguous or forbidden words, such as 'might', 'and/or', 'as needed' or '(to be decided)'.
Features or characteristics of requirements management tools are:
To store the requirement statements.
To store the information about requirement attributes.
To check consistency of requirements.
To identify undefined, missing or 'to be defined later' requirements.
To prioritize requirements for testing purposes.
To trace the requirements to tests and tests to requirements, functions or features.
To trace through all the levels of requirements.
Interfacing to test management tools.
Coverage of requirements by a set of tests (sometimes).
Incident Management Tools
An incident management tool is also known as a defect-tracking tool, a defect-management tool, a bug-tracking tool or a bug-management tool. However, 'incident management tool' is perhaps a better name for it, because not all of the things tracked are actually defects or bugs.
Features or characteristics of incident management tools are:
To store the information about the attributes of incidents (e.g. severity).
To store attachments (e.g. a screen shot).
To prioritize incidents.
To assign actions to people (fix, confirmation test, etc.).
To track the status of incidents (e.g. open, rejected, duplicate, deferred, ready for confirmation test, closed).
To report the statistics/metrics about incidents (e.g. average time open, number of incidents with each status, total number raised, open or closed).
Incident management tool functionality may be included in commercial test management tools. A single incident record might carry fields like those in the sketch below.
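As an illustration of the attributes such tools store, here is a hypothetical shape for one incident record (all field names and values are invented for the example):

    # Hypothetical incident record as an incident management tool might store it.
    incident = {
        "id": "INC-0042",
        "summary": "Balance shown as negative after refund",
        "severity": "major",
        "priority": "high",
        "status": "open",   # open / rejected / duplicate / deferred / closed
        "assigned_to": "dev-team",
        "attachments": ["screenshot.png"],
    }
    print(incident["id"], "-", incident["status"])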
Configuration Management Tools
Configuration management tools are not strictly testing tools either, but good configuration management is critical for controlled testing.
To understand this better, let us take an example: a test group started testing the software, expecting to find the usual quite high number of problems. But to their surprise, the software seemed to be much better than usual this time, and very few defects were found.
Features or characteristics of configuration management tools are:
To store information about versions and builds of the software and testware.
Traceability between software and testware and different versions or variants.
To keep track of which versions belong with which configurations (e.g. operating systems, libraries, browsers).
Build and release management.
Baselining (e.g. all the configuration items that make up a specific release).
Access control (checking in and out).
IEEE 829 Standard Test Log Template

Test Log Identifier
Some type of unique company-generated number to identify this test log, its level and the level of software that it is related to. Preferably the test log level will be the same as the related test case or procedure level. Ideally the test log naming convention should follow the same general rules as the testware it is related to. This is to assist in coordinating software and testware versions within configuration management.
Unique "short" name for the log
Version date and version number of log
Description
Items being tested and any supporting reference materials
Case specification
Procedure specification
Transmittal report
Date & time
Executed by
Tester
Observer
Environment
Especially any variances from the planned test environment
Activity and Event Entries
Date/time
Beginning of each significant activity
End of each activity
Execution description
Procedure executed (reference to its location)
Personnel present at test
Tester, developer, observer
Reason for each person's presence
Procedure results
For each execution, log all relevant information
Error messages, aborts, interventions
Location of outputs
Result status (success, failure, unknown)
Environmental information
Any changes or substitutions from requested environment
Anomalous Events
Record events before and after an anomaly
Any special situations
Power interruptions, etc.
Incident reports
Report on each unexpected result or variance from expected output
IEEE 829 Standard Test Plan Template

Test plan identifier
Introduction
Test items
Features to be tested
Features not to be tested
Approach
Item pass/fail criteria
Suspension and resumption criteria
Test deliverables
Test tasks
Environmental needs
Responsibilities
Staffing and training needs
Schedule
Risks and contingencies
Approvals
What are the advantages or benefits of using testing tools?
Reduction of repetitive work: Repetitive work is very boring if it is done manually. People tend to make mistakes when doing the same task over and over. Examples of this type of repetitive work include running regression tests, entering the same test data again and again (which can be done by a test execution tool), checking against coding standards (which can be done by a static analysis tool) or creating a specific test database (which can be done by a test data preparation tool).
Greater consistency and repeatability: People have a tendency to do the same task in a slightly different way even when they think they are repeating something exactly. A tool will exactly reproduce what it did before, so each time it is run the result is consistent.
Objective assessment: If a person calculates a value from the software or incident reports, by mistake they may omit something, or their own one-sided preconceived judgments or convictions may lead them to interpret that data incorrectly. Using a tool means that subjective preconceived notions are removed and the assessment is more repeatable and consistently calculated. Examples include assessing the cyclomatic complexity or nesting levels of a component (which can be done by a static analysis tool), coverage (coverage measurement tool), system behavior (monitoring tools) and incident statistics (test management tool).
Ease of access to information about tests or testing: Information presented visually is much easier for the human mind to understand and interpret. For example, a chart or graph is a better way to show information than a long list of numbers; this is why charts and graphs in spreadsheets are so useful. Special purpose tools give these features directly for the information they process. Examples include statistics and graphs about test progress (test execution or test management tool), incident rates (incident management or test management tool) and performance (performance testing tool).
What are the important factors for the software testing tool selection?
While introducing a tool in the organization, it must match a need within the organization, and solve that need in a way that is both effective and efficient. The tool should help in building on the strengths of the organization and should also address its weaknesses.
The organization needs to be ready for the changes that will come along with the new tool. If the current testing practices are not good enough and the organization is not mature, then it is always recommended to improve testing practices first rather than to try to find tools to support poor practices. Automating chaos just gives faster chaos!
The following factors are important during tool selection:
Assessment of the organization's maturity (e.g. readiness for change);
Identification of the areas within the organization where tool support will help to improve testing processes;
Evaluation of tools against clear requirements and objective criteria;
Proof-of-concept to see whether the product works as desired and meets the requirements and objectives defined for it;
Evaluation of the vendor (training, support and other commercial aspects) or open-source network of support;
Identifying and planning internal implementation (including coaching and mentoring for those new to the use of the tool).
What is a proof-of-concept or piloting phase for tool evaluation in software testing?
One of the ways to do a proof-of-concept is to have a pilot project as the first thing done with a new tool. This will use the tool on a small scale, with sufficient time to explore different ways in which it can be used.
Objectives should be set for the pilot in order to accomplish what is needed within the current organizational context.
A pilot tool project is expected to hit issues or problems; these should be solved in ways that can be used by everyone later on.
The pilot project should experiment with different ways of using the tool. For example, different settings for a static analysis tool, different reports from a test management tool, different scripting and comparison techniques for a test execution tool or different load profiles for a performance-testing tool.
The objectives for a pilot project for a new tool are:
To learn more about the tool, in detail;
To see how the tool would fit with existing processes or documentation, how those would need to change to work well with the tool and how to use the tool to streamline existing processes;
To decide on standard ways of using the tool that will work for all potential users (e.g. naming conventions, creation of libraries, defining modularity, where different elements will be stored, how they and the tool itself will be maintained);
To evaluate the pilot project against its objectives (have the benefits been achieved at reasonable cost?).
What are the risks or disadvantages of using the testing tools?
Although there are many benefits that can be achieved by using tools to support testing activities, there are also many risks associated with introducing and using tool support for testing. Risks include:
Unrealistic expectations from the tool: Unrealistic expectations may be one of the greatest risks to success with tools. The tools are just software, and we all know that there are many problems associated with any kind of software. It is very important to have clear and realistic objectives for what the tool can do.
People often underestimate the time, cost and effort for the initial introduction of a tool: Introducing something new into an organization is hardly straightforward. Once you purchase a tool, you want a number of people to be able to use the tool in a way that is beneficial. There will be some technical issues to overcome, but there will also be resistance from other people; both need to be handled in such a way that the tool will be a success.
People frequently miscalculate the time and effort needed to achieve significant and continuing benefits from the tool: Mostly in the initial phase, when the tool is new to the people, they miscalculate the time and effort needed to achieve significant and continuing benefits from the tool. Just think back to the last time you tried something new for the very first time (learning to drive, riding a bike, skiing). Your first attempts were unlikely to be very good, but with more experience and practice you became much better.
People underestimate the effort required to maintain the test assets generated by the tool: Because of insufficient planning for maintenance of the assets that the tool produces, there are chances that the tool might end up as shelf-ware, along with the previously listed risks.
Over-reliance on the tool: Since there are many benefits that can be gained by using tools to support testing, like reduction of repetitive work and greater consistency and repeatability, people start to depend on the tool a lot. But tools are just software; they can do only what they have been designed to do (at least a good quality tool can), and they cannot do everything.
What are Dynamic analysis tools in software testing?
Dynamic analysis tools are 'dynamic' because they require the code to be in a running state. They are 'analysis' rather than 'testing' tools because they analyze what is happening 'behind the scenes', that is, in the code while the software is running (whether being executed with test cases or being used in operation).
Let us take the example of a car to understand this better. If you go to a showroom to buy a car, you might sit in the car to see if it is comfortable and hear what sound the doors make; this would be static analysis, because the car is not being driven.
If you take a test drive, then you check how the car performs when it is running, e.g. that the car turns right when you turn the steering wheel clockwise, or how the car responds when you press the brake, and you can also check the oil pressure or the brake fluid; this would be dynamic analysis, as it can only be done while the engine is running.
Features or characteristics of dynamic analysis tools are as follows:
To detect memory leaks;
To identify pointer arithmetic errors such as null pointers;
To identify time dependencies.
If your computer's response time gets slower and slower, but improves after re-booting, this may be because of a memory leak, where programs do not correctly release blocks of memory back to the operating system. Sooner or later the system will run out of memory completely and stop. Rebooting restores the memory that was lost, so the performance of the system returns to its normal state. A tiny illustration of this kind of defect follows.
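As a hypothetical illustration of the kind of defect these tools look for, the following Python sketch shows an unbounded cache, a common source of ever-growing memory use (everything here is invented for the example):

    # A cache that only grows: the longer the process runs, the more memory
    # it holds, because entries are never evicted or released.
    _cache = {}

    def lookup(key):
        if key not in _cache:
            _cache[key] = str(key) * 100  # stand-in for an expensive result
        return _cache[key]

    for i in range(10_000):  # long-running use keeps adding entries
        lookup(i)
    print(len(_cache))       # 10000 entries are still held in memory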
These tools would typically be used by developers in component testing and component integration testing, e.g. when testing middleware, when testing security or when looking for robustness defects.
Another form of dynamic analysis for websites is to check whether each link does actually link to something else (this type of tool may be called a 'web spider'). The tool does not know if you have linked to the correct page, but at least it can find dead links, which may be helpful.
What is Validation in software testing? or What is software validation?
Validation determines whether the system complies with the requirements, performs the functions for which it is intended and meets the organization's goals and user needs.
Validation is done at the end of the development process and takes place after verifications are completed.
It answers questions like: Am I building the right product? Am I accessing the right data (in terms of the data required to satisfy the requirement)?
It is a high-level activity.
It is performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into the environment.
It is the determination of the correctness of the final software product by a development project with respect to the user needs and requirements.
According to the Capability Maturity Model (CMMI-SW v1.1) we can also define validation as 'The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements.' [IEEE-STD-610]
Advantages of Validation:
1. If some defects are missed during verification, they can be caught as failures during the validation process.
2. If during verification some specification was misunderstood and development happened accordingly, then during the validation process, while executing that functionality, the difference between the actual result and the expected result can be understood.
3. Validation is done during testing like feature testing, integration testing, system testing, load testing, compatibility testing, stress testing, etc.
4. Validation helps in building the right product as per the customer's requirements and helps in satisfying their needs.
Validation is basically done by the testers during testing. While validating the product, if some deviation is found in the actual result from the expected result, then a bug is reported or an incident is raised. Not all incidents are bugs, but all bugs are incidents. Incidents can also be of type 'Question', where the functionality is not clear to the tester.
Hence, validation helps in unfolding the exact functionality of the features and helps the testers to understand the product in a much better way. It helps in making the product more user friendly.
What is Verification in software testing? or What is software verification?
Verification makes sure that the product is designed to deliver all functionality to the customer.
Verification is done at the start of the development process. It includes reviews and meetings, walkthroughs, inspections, etc. to evaluate documents, plans, code, requirements and specifications.
Suppose you are building a table. Here verification is about checking all the parts of the table, e.g. whether all four legs are of the correct size or not. If one leg of the table is not of the right size, it will imbalance the end product. Similar behavior is also noticed in the case of a software product or application: if any feature of the software product or application is not up to the mark, or if any defect is found, it will result in the failure of the end product.
Hence, verification is very important. It takes place at the start of the development process.
It answers questions like: Am I building the product right? Am I accessing the data right (in the right place, in the right way)?
It is a low-level activity.
It is performed during development on key artifacts, through walkthroughs, reviews and inspections, mentor feedback, training, checklists and standards.
It is the demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle.
Advantages of Software Verification:
1. Verification helps in lowering the count of defects in the later stages of development.
2. Verifying the product at the starting phase of development helps in understanding the product in a better way.
3. It reduces the chances of failures in the software application or product.
4. It helps in building the product as per the customer's specifications and needs.
What are the uses of Static Testing?
The uses of static testing are as follows:
Since static testing can start early in the life cycle, early feedback on quality issues can be established.
As the defects are detected at an early stage, the rework cost is most often relatively low.
Development productivity is likely to increase because of the reduced rework effort.
Types of defects that are easier to find during static testing are: deviations from standards, missing requirements, design defects, non-maintainable code and inconsistent interface specifications.
Static tests contribute to increased awareness of quality issues.
What is test coverage in software testing? Its advantages and disadvantages
Test coverage measures the amount of testing performed by a set of tests. Wherever we can count things and can tell whether or not each of those things has been tested by some test, we can measure coverage; this is known as test coverage.
The basic coverage measure is: coverage = (number of coverage items exercised / total number of coverage items) x 100%, where the 'coverage item' is whatever we have been able to count and see whether a test has exercised or used this item.
There is danger in using a coverage measure: 100% coverage does not mean 100% tested. Coverage techniques measure only one dimension of a multi-dimensional concept. Two different test cases may achieve exactly the same coverage, but the input data of one may find an error that the input data of the other doesn't.
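A quick worked illustration of the basic measure (the counts are hypothetical, with decision outcomes as the coverage item):

    # Basic coverage measure: items exercised / total items * 100.
    total_coverage_items = 48   # e.g. decision outcomes in the component
    items_exercised = 36        # outcomes taken at least once by the tests

    coverage = items_exercised / total_coverage_items * 100
    print(f"Decision coverage = {coverage:.0f}%")  # Decision coverage = 75%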
Benefits of code coverage measurement:
It creates additional test cases to increase coverage.
It helps in finding areas of a program not exercised by a set of test cases.
It helps in determining a quantitative measure of code coverage, which indirectly measures the quality of the application or product.
Drawbacks of code coverage measurement:
One drawback of code coverage measurement is that it measures coverage of what has been written, i.e. the code itself; it cannot say anything about the software that has not been written.
If a specified function has not been implemented, or a function was omitted from the specification, then structure-based techniques cannot say anything about them; they only look at a structure which is already there.
What is smoke testing? When to use it? Advantages and disadvantages
Smoke testing is a type of software testing which ensures that the major functionalities of the application are working fine. This testing is also known as 'Build Verification testing'.
It is a non-exhaustive testing with very limited test cases, to ensure that the important features are working fine and we are good to proceed with the detailed testing.
The term 'smoke testing' originated in hardware testing, where a device, when first switched on, is tested for smoke or fire from its components. This ensures that the hardware's basic components are working fine and no serious failures are found.
Similarly, when we do smoke testing of an application, we are trying to ensure that there are no major failures before giving the build for exhaustive testing.
The purpose of smoke testing is to ensure that the critical functionalities of an application are working fine.
It is a non-exhaustive testing with a very limited number of test cases.
It is also known as 'Build Verification testing', where the build is verified by testing the important features of the application and then declared as good to go for further detailed testing.
Smoke testing can be done by developers before releasing the build to the testers, and after this it is also done by the testing team to ensure that the build is stable enough to perform the detailed testing.
Usually smoke testing is performed with positive scenarios and with valid data.
It is a type of shallow and wide testing, because it covers all the basic and important functionalities of an application.
Usually smoke testing is documented.
Smoke testing is like a normal health check-up of the build of an application.
Examples for Smoke Testing
Let us assume that there is an application like 'Student Network' which has 15 modules. Among them, there are 4 important components: the login page, adding student details, updating them and deleting them.
As a part of smoke testing we will test the login page with valid input. After login we will test the addition, updating and deletion of records. If all 4 critical components work fine, then the build is stable enough to proceed with detailed testing. This is known as smoke testing; a minimal sketch of such a suite follows.
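A minimal sketch of a smoke suite for the hypothetical 'Student Network' build (the check functions are stand-ins for real interactions with the application):

    # Each check exercises one critical module; any failure blocks detailed testing.
    def check_login():  return True   # stand-in for: log in with valid credentials
    def check_add():    return True   # stand-in for: add a student record
    def check_update(): return True   # stand-in for: update the record
    def check_delete(): return True   # stand-in for: delete the record

    smoke_checks = [check_login, check_add, check_update, check_delete]
    if all(check() for check in smoke_checks):
        print("Build is stable: proceed with detailed testing")
    else:
        print("Smoke test failed: reject the build")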
When to use smoke testing
Smoke testing is used in the following scenarios:
It is done by developers before giving the build to the testing team.
It is done by the testers before they start the detailed testing.
Smoke testing is done to ensure that the basic functionalities of the application are working fine.
Advantages of smoke testing
It helps in finding bugs in the early stage of testing.
It helps in finding issues that got introduced by the integration of components.
It helps in verifying that issues fixed in the previous build are NOT impacting the major functionalities of the application.
Only a very limited number of test cases is required to do the smoke testing.
Smoke testing can be carried out in a small amount of time.
Disadvantages of smoke testing
Smoke testing does not cover the detailed testing.
It is a non-exhaustive testing with a small number of test cases, because of which we are not able to find the other critical issues.
Smoke testing is not performed with negative scenarios or with invalid data.
Difference between Retesting and Regression Testing.
Regression testing is done to find out issues which may get introduced because of any change or modification in the application. Retesting is done to confirm whether the failed test cases of the final execution are working fine after the issues have been fixed.
The purpose of regression testing is that any new change in the application should NOT introduce any new bug in existing functionality. The purpose of retesting is to ensure that the particular bug or issue is resolved and the functionality is working as expected.
Verification of bugs is not included in regression testing, whereas verification of bugs is included in retesting.
Regression testing can be done in parallel with retesting. Retesting is of high priority, so it is done before the regression testing.
For regression testing, test cases can be automated. For retesting, the test cases cannot be automated.
In regression testing the testing style is generic, whereas in retesting the testing is done in a planned way.
During regression testing even the passed test cases are executed. During retesting only failed test cases are re-executed.
Regression testing is carried out to check for unexpected side effects. Retesting is carried out to ensure that the original issue has been fixed and works as expected.
Regression testing is done only when a new feature is implemented or any modification or enhancement has been done to the code. Retesting is executed in the same environment with the same data but on a new build.
Test cases for regression testing can be obtained from the specification documents and bug reports. Test cases for retesting can be obtained only when the testing starts.
What are Security testing tools in software testing?
Security testing tools can be used to test the security of the system by trying to break it or by hacking it. The attacks may focus on the network, the support software, the application code or the underlying database.
Features or characteristics of security testing tools are:
To identify viruses;
To detect intrusions such as denial of service attacks;
To simulate various types of external attacks;
Probing for open ports or other externally visible points of attack;
To identify weaknesses in password files and passwords;
To do security checks during operation, e.g. checking integrity of files, and intrusion detection, e.g. checking results of test attacks.
What is Volume testing in software testing?
It is a type of non-functional testing.
Volume testing refers to testing a software application or product with a certain amount of data. E.g., if we want to volume test our application with a specific database size, we need to expand our database to that size and then test the application's performance on it.
Volume testing is a term given and described in Glenford Myers' The Art of Software Testing, 1979. Here is his definition: 'Subjecting the program to heavy volumes of data. The purpose of volume testing is to show that the program cannot handle the volume of data specified in its objectives' (p. 113).
The purpose of volume testing is to determine system performance with increasing volumes of data in the database.
What is Stress testing in software testing?
It is a type of non-functional testing.
It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
It is a form of software testing that is used to determine the stability of a given system.
It puts greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances.
The goals of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space).
What is Load testing in software testing?
Load testing is a type of non-functional testing.
A load test is a type of software testing which is conducted to understand the behavior of the application under a specific expected load.
Load testing is performed to determine a system's behavior under both normal and peak conditions.
It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which element is causing degradation. E.g., if the number of users is increased, then how much CPU and memory will be consumed, and what are the network and bandwidth response times?
Load testing can be done under controlled lab conditions to compare the capabilities of different systems or to accurately measure the capabilities of a single system.
Load testing involves simulating real-life user load for the target application. It helps you determine how your application behaves when multiple users hit it simultaneously, as the sketch below illustrates.
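A minimal load-generation sketch: several concurrent simulated users exercise one operation while response times are recorded (the sleep stands in for a real request, and all numbers are hypothetical):

    import threading, time

    def one_user(results, i):
        start = time.perf_counter()
        time.sleep(0.01)  # stand-in for a request to the system under test
        results[i] = time.perf_counter() - start

    results = {}
    users = [threading.Thread(target=one_user, args=(results, i)) for i in range(50)]
    for t in users: t.start()
    for t in users: t.join()
    print(f"max response time: {max(results.values()) * 1000:.1f} ms")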
Load testing differs from stress testing, which evaluates the extent to which a system keeps working when subjected to extreme workloads or when some of its hardware or software has been compromised.
The primary goal of load testing is to define the maximum amount of work a system can handle without significant performance degradation.
Examples of load testing include:
o Downloading a series of large files from the internet.
o Running multiple applications on a computer or server simultaneously.
o Assigning many jobs to a printer in a queue.
o Subjecting a server to a large amount of traffic.
o Writing and reading data to and from a hard disk continuously.
What is Waterfall model - advantages, disadvantages and when to use it?
The Waterfall Model was the first Process Model to be introduced. It is also referred to as a linear-sequential life cycle model.
It is very simple to understand and use. In a waterfall model, each phase must be completed fully before the next phase can begin. This type of model is basically used for projects which are small and have no uncertain requirements.
At the end of each phase, a review takes place to determine if the project is on the right path and whether or not to continue or discard the project.
In this model the testing starts only after the development is complete. In the waterfall model, phases do not overlap.
Advantages of waterfall model:
This model is simple and easy to understand and use.
It is easy to manage due to the rigidity of the model; each phase has specific deliverables and a review process.
In this model phases are processed and completed one at a time. Phases do not overlap.
The waterfall model works well for smaller projects where requirements are very well understood.
Disadvantages of waterfall model:
Once an application is in the testing stage, it is very difficult to go back and change something that was not well thought out in the concept stage.
No working software is produced until late during the life cycle.
High amounts of risk and uncertainty.
Not a good model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Not suitable for projects where requirements are at a moderate to high risk of changing.
When to use the waterfall model:
This model is used only when the requirements are very well known, clear and fixed.
Product definition is stable.
Technology is understood.
There are no ambiguous requirements.
Ample resources with required expertise are available freely.
The project is short.
What is Incremental model - advantages, disadvantages and when to use it?
In the incremental model the whole requirement is divided into various builds. Multiple development cycles take place here, making the life cycle a 'multi-waterfall' cycle.
Cycles are divided up into smaller, more easily managed modules. Each module passes through the requirements, design, implementation and testing phases.
A working version of the software is produced during the first module, so you have working software early on during the software life cycle. Each subsequent release of the module adds function to the previous release. The process continues till the complete system is achieved.
When we work incrementally we are adding piece by piece, but expect that each piece is fully finished; we keep on adding the pieces until the system is complete.
For example, suppose a person has thought of an application. In the first iteration the first module of the application or product is totally ready and can be demoed to the customers. Likewise, in the second iteration the next module is ready and integrated with the first module. Similarly, in the third iteration the whole product is ready and integrated. Hence, the product gets ready step by step.
Advantages of Incremental model:
Generates working software quickly and early during the software life cycle.
This model is more flexible: it is less costly to change scope and requirements.
It is easier to test and debug during a smaller iteration.
In this model the customer can respond to each build.
Lowers initial delivery cost.
Easier to manage risk, because risky pieces are identified and handled during their iteration.
Disadvantages of Incremental model:
Needs good planning and design.
Needs a clear and complete definition of the whole system before it can be broken down and built incrementally.
Total cost is higher than waterfall.
What is Spiral model - advantages, disadvantages and when to use it?
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation.
A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Planning Phase: Requirements are gathered during the planning phase, such as the BRS (Business Requirement Specification) and the SRS (System Requirement Specification).
Risk Analysis: In the risk analysis phase, a process is undertaken to identify risks and alternate solutions. A prototype is produced at the end of the risk analysis phase. If any risk is found during the risk analysis, then alternate solutions are suggested and implemented.
Engineering Phase: In this phase software is developed, along with testing at the end of the phase. Hence in this phase the development and testing is done.
Evaluation Phase: This phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
Advantages of Spiral model:
High amount of risk analysis, hence avoidance of risk is enhanced.
Good for large and mission-critical projects.
Strong approval and documentation control.
Additional functionality can be added at a later date.
Software is produced early in the software life cycle.
Disadvantages of Spiral model:
Can be a costly model to use.
Risk analysis requires highly specific expertise.
The project's success is highly dependent on the risk analysis phase.
Doesn't work well for smaller projects.
What is RAD model - advantages, disadvantages and when to use it?
RAD model stands for Rapid Application Development model. It is a type of incremental model.
In the RAD model the components or functions are developed in parallel, as if they were mini projects. The developments are time-boxed, delivered and then assembled into a working prototype.
This can quickly give the customer something to see and use and to provide feedback regarding the delivery and their requirements.
The phases in the rapid application development (RAD) model are:
Business modeling: The information flow is identified between various business functions.
Data modeling: Information gathered from business modeling is used to define data objects that are needed for the business.
Process modeling: Data objects defined in data modeling are converted to achieve the business information flow, to achieve some specific business objective. Descriptions are identified and created for CRUD (create, read, update, delete) of data objects.
Application generation: Automated tools are used to convert process models into code and the actual system.
Testing and turnover: Test new components and all the interfaces.
Advantages of the RAD model:
Reduced development time.
Increases reusability of components.
Quick initial reviews occur.
Encourages customer feedback.
Integration from the very beginning solves a lot of integration issues.
Disadvantages of RAD model:
Depends on strong team and individual performances for identifying business requirements.
Only systems that can be modularized can be built using RAD.
Requires highly skilled developers/designers.
High dependency on modeling skills.
Inapplicable to cheaper projects, as the cost of modeling and automated code generation is very high.
What is Prototype model - advantages, disadvantages and when to use it?
Prototypes are usually not complete systems, and many of the details are not built into the prototype. The goal is to provide a system with overall functionality.
Advantages of Prototype model:
Users are actively involved in the development.
Since in this methodology a working model of the system is provided, the users get a better understanding of the system being developed.
Errors can be detected much earlier.
Quicker user feedback is available, leading to better solutions.
Missing functionality can be identified easily.
Confusing or difficult functions can be identified.
It supports requirements validation through quick implementation of an incomplete but functional application.
Disadvantages of Prototype model:
Leads to an 'implement and then repair' way of building systems.
Practically, this methodology may increase the complexity of the system, as the scope of the system may expand beyond the original plans.
An incomplete application may cause the application not to be used, as the full system was what was designed.
Incomplete or inadequate problem analysis.
When to use Prototype model:
The Prototype model should be used when the desired system needs to have a lot of interaction with the end users.
Typically, online systems and web interfaces, which have a very high amount of interaction with end users, are best suited for the Prototype model. It might take a while for a system to be built that allows ease of use and needs minimal training for the end user.
Prototyping ensures that the end users constantly work with the system and provide feedback which is incorporated in the prototype, resulting in a usable system. Prototypes are excellent for designing good human-computer interface systems.
What is Baseline testing in software?
It is one of the types of non-functional testing.
It refers to the validation of documents and specifications on which test cases would be designed. Requirement specification validation is baseline testing.
Generally a baseline is defined as a line that forms the base for any construction or for measurement, comparisons or calculations.
Baseline testing also helps a great deal in solving most of the problems that are discovered: a majority of the issues are solved through baseline testing.
What is documentation testing in software testing?
It is a type of non-functional testing.
Documentation is any written or pictorial information describing, defining, specifying, reporting, or certifying activities, requirements, procedures, or results. Documentation is as important to a product's success as the product itself. If the documentation is poor, non-existent, or wrong, it reflects on the quality of the product and the vendor.
As per the IEEE, this includes documentation describing plans for, or results of, the testing of a system or component; types include test case specification, test incident report, test log, test plan, test procedure and test report. Hence the testing of all the above mentioned documents is known as documentation testing.
This is one of the most cost-effective approaches to testing. If the documentation is not right, there will be major and costly problems.
The documentation can be tested in a number of different ways, to many different degrees of complexity. These range from running the documents through a spelling and grammar checking device, to manually reviewing the documentation to remove any ambiguity or inconsistency.
Documentation testing can start at the very beginning of the software process and hence save large amounts of money, since the earlier a defect is found the less it will cost to fix.
What is Efficiency testing in software?
Efficiency testing tests the amount of code and testing resources required by a program to perform a particular function. Software test efficiency is the number of test cases executed divided by a unit of time (generally per hour).
It is internal to the organization: how many resources were consumed, and how much of these resources were utilized.
Here are some formulas to calculate software test efficiency (for different factors), with a small worked sketch below:
Test efficiency = (total number of defects found in unit + integration + system testing) / (total number of defects found in unit + integration + system + user acceptance testing)
Testing efficiency = (number of defects resolved / total number of defects submitted) x 100
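A quick worked sketch of both formulas (all defect counts are hypothetical):

    # Hypothetical defect counts to illustrate the two formulas above.
    defects_before_uat = 80   # found in unit + integration + system testing
    defects_uat = 20          # found in user acceptance testing
    defects_resolved = 85
    defects_submitted = 100

    test_efficiency = defects_before_uat / (defects_before_uat + defects_uat)
    testing_efficiency = defects_resolved / defects_submitted * 100

    print(f"Test efficiency = {test_efficiency:.2f}")         # 0.80
    print(f"Testing efficiency = {testing_efficiency:.0f}%")  # 85%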
Software test effectiveness covers three aspects:
How much the customer's requirements are satisfied by the system.
How well the customer specifications are achieved by the system.
How much effort is put in developing the system.
What is Big Bang integration testing?


In Big Bang integration testing all components or modules are integrated
simultaneously, after which everything is tested as a whole.

In this approach individual modules are not integrated until and unless all the
modules are ready.

In Big Bang integration testing all the modules are integrated without performing
any integration testing and then its executed to know whether all the integrated
modules are working fine or not.

This approach is generally executed by those developers who follows the Run it
and see approach.

Because of integrating everything at one time if any failures occurs then it


become very difficult for the programmers to know the root cause of that failure.

In case any bug arises then the developers has to detach the integrated modules
in order to find the actual cause of the bug.

Suppose a system consists of four modules: Module A, Module B, Module C and Module D. In big bang integration all four modules are integrated simultaneously and then the testing is performed. Hence in this approach no individual integration testing is performed, because of which the chances of critical failures increase.
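As a minimal sketch of the approach, the Python test below (the four module functions are hypothetical stand-ins) wires Module A through Module D together in a single step instead of integrating them incrementally. Note how a failing assertion would point at the whole chain rather than at any one interface, which is exactly the traceability problem described below.

import unittest

# Hypothetical stand-ins for four independently developed modules.
def module_a(raw):      # e.g. input parsing
    return raw.strip().split(",")

def module_b(fields):   # e.g. validation: drop empty fields
    return [f for f in fields if f]

def module_c(fields):   # e.g. transformation
    return [f.upper() for f in fields]

def module_d(fields):   # e.g. output formatting
    return "|".join(fields)

class BigBangIntegrationTest(unittest.TestCase):
    def test_whole_chain(self):
        # All four modules exercised at once; if this fails, the faulty
        # interface could be A-B, B-C or C-D.
        result = module_d(module_c(module_b(module_a(" a,,b,c "))))
        self.assertEqual(result, "A|B|C")

if __name__ == "__main__":
    unittest.main()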
Advantage of Big Bang Integration:

Big Bang testing has the advantage that everything is finished before integration
testing starts.

Disadvantages of Big Bang Integration:

The major disadvantage is that, in general, it is very time consuming.

It is very difficult to trace the cause of failures because of this late integration.

The chances of critical failures are higher because all the components are integrated at the same time.

If any bug is found, it is very difficult to detach all the modules in order to find its root cause.

There is a high probability of critical bugs occurring in the production environment.

From where do defects and failures in software testing arise?


Errors in the specification, design and implementation of the software and
system

Errors in use of the system

Environmental conditions

Intentional damage

Potential consequences of earlier errors

Errors in the specification and design of the software:

A specification is basically a written document which describes the functional and non-functional aspects of the software using prose and pictures.

Specifications can be tested without any code at all. About 55% of all the bugs present in a product stem from mistakes in the specification. Hence testing the specifications can save a great deal of time and cost in the later stages of the product.

Errors in use of the system:

Errors in use of the system or product or application may arise because of the
following reasons:

Inadequate knowledge of the product or the software on the tester's part. The tester may not be aware of the functionalities of the product, and hence defects or failures may be reported while testing the product.

Environmental conditions:

Because of a wrong setup of the testing environment, testers may report defects or failures. As per recent surveys, about 40% of testers' time is consumed by environment issues, and this has a great impact on quality and productivity. Hence proper test environments are required for quality and on-time delivery of the product to the customers.

Intentional damage:

The defects and failures reported by the testers while testing the product or the application may arise because of intentional damage.

Potential consequences of earlier errors:

Errors found in the earlier stages of development reduce the cost of production; hence it is very important to find errors at an early stage. This can be done by reviewing the specification documents or through walkthroughs. A defect that flows downstream into later stages will increase the cost of production.

What is Non-functional testing (Testing of software product characteristics)?


Non-functional testing addresses aspects of the software that may not be related to a specific function or user action, such as scalability or security.
Reliability testing: Reliability testing is about exercising an application so that failures are discovered and removed before the system is deployed. The purpose of reliability testing is to determine product reliability, and to determine whether the software meets the customer's reliability requirements.

Usability testing: In usability testing, the testers basically test the ease with which the user interfaces can be used. It tests whether the application or the product built is user-friendly or not.

Usability testing includes the following five components:

Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?

Efficiency: How fast can experienced users accomplish tasks?

Memorability: When users return to the design after a period of not using it, does the user remember enough to use it effectively the next time, or does the user have to start over again learning everything?

Errors: How many errors do users make, how severe are these
errors and how easily can they recover from the errors?

Satisfaction: How much does the user like using the system?

Efficiency testing: Efficiency testing tests the amount of code and testing resources required by a program to perform a particular function. Software test efficiency is the number of test cases executed per unit of time (generally per hour).

Maintainability testing: It basically determines how easy it is to maintain the system, i.e. how easy it is to analyze, change and test the application or product.

Portability testing: It refers to the process of testing the ease with which a computer software component or application can be moved from one environment to another, e.g. moving an application from Windows 2000 to Windows XP.

Baseline testing: It refers to the validation of the documents and specifications on which test cases would be designed. The requirement specification validation is baseline testing.

Compliance testing: It is related to the IT standards followed by the company, and it is the testing done to find deviations from the company's prescribed standards.

Documentation testing: As per the IEEE, this is the testing of "documentation describing plans for, or results of, the testing of a system or component". Types include test case specification, test incident report, test log, test plan, test procedure, and test report. Hence the testing of all the above-mentioned documents is known as documentation testing.

Endurance testing: Endurance testing involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use. For example, a system may behave exactly as expected when tested for 1 hour, but when the same system is tested for 3 hours, problems such as memory leaks cause it to fail or behave randomly.

Load testing: A load test is usually conducted to understand the behavior of the application under a specific expected load. Load testing is performed to determine a system's behavior under both normal and peak conditions, e.g. if the number of users is increased, how much CPU and memory will be consumed, and what will the network and bandwidth response times be? (A minimal sketch of this idea appears after this list.)

Performance testing: Performance testing is performed to determine how fast some aspect of a system performs under a particular workload. It can serve different purposes; for example, it can demonstrate that the system meets performance criteria.

Compatibility testing: Compatibility testing is basically the testing of the application or the product built against its computing environment. It tests whether the application or the software product built is compatible with the hardware, operating system, database or other system software or not.

Security testing: Security testing basically checks whether the application or the product is secure or not: can anyone come tomorrow and hack the system, or log in to the application without any authorization? It is a process to determine that an information system protects data and maintains functionality as intended.

Scalability testing: It is the testing of a software application for measuring its capability to scale up in terms of any of its non-functional capabilities, such as the load supported, the number of transactions, or the data volume.

Volume testing: Volume testing refers to testing a software application or product with a large volume of data.

Stress testing: It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. It is a form of testing that is used to determine the stability of a given system.

Recovery testing: Recovery testing is done in order to check how quickly and how well the application can recover after it has gone through any type of crash or hardware failure. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed.

Internationalization testing and Localization testing: Internationalization is the process of designing a software application so that it can be adapted to various languages and regions without any changes, whereas localization is the process of adapting internationalized software for a specific region or language by adding locale-specific components and translating text.
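As referenced in the load testing item above, here is a minimal sketch of a load test in Python. The unit of work, user counts, and request counts are all invented for illustration; a real load test would replace unit_of_work with an actual request against the system under test and record per-request response times rather than only the total.

import time
from concurrent.futures import ThreadPoolExecutor

def unit_of_work():
    """Hypothetical stand-in for one user request (e.g. an HTTP call)."""
    time.sleep(0.01)

def run_load(users, requests_per_user):
    """Drive the workload with `users` concurrent users and report timing."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users * requests_per_user):
            pool.submit(unit_of_work)
    elapsed = time.perf_counter() - start
    print(f"{users:4d} users: {elapsed:.2f}s total")

# Normal, peak, and beyond-peak (stress) load levels -- invented figures.
for users in (10, 50, 200):
    run_load(users, requests_per_user=5)

Run at increasing load levels, the same harness also serves as a crude stress test: the point where total time or error rate degrades sharply marks the breaking point described under stress testing.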
