
TESTING FUNDAMENTALS

JUNE 24, 2016


MOBILIYA TECHNOLOGIES
VIJAY.PATIL@MOBILIYA.COM, SANOTSH.PANDA@MOBILIYA.COM
Contents
About the Tutorial
Audience
Prerequisites
Overview
    What is Testing?
    Who does Testing?
    When to Start Testing?
    When to Stop Testing?
    Why Testing is Necessary
Principles of Testing
Software Testing Objectives and Purpose
Bug/Defect in Software Testing
Defect Life Cycle
    Defect Report or Bug Report
Test Process
    Planning and Control
    Analysis and Design
    Implementation and Execution
    Evaluating Exit Criteria and Reporting
    Test Closure Activities
Software Development Life Cycle Models or Methodologies
    Waterfall Model
    Agile Model
    V-Model
Verification & Validation
QA, QC, AND TESTING
    Audit and Inspection
    Testing and Debugging
TYPES OF TESTING
    Manual Testing
    Automation Testing
        What to Automate?
        When to Automate?
        How to Automate?
Software Testing Tools
Test Bed / Test Environment
Testing Methods
    Black-Box Testing
    White-Box Testing
    Grey-Box Testing
    Comparison of Testing Methods
Testing Levels
    Functional Testing
        Smoke and Sanity Testing
        Unit Testing
        Integration Testing
        System Testing
        Regression Testing
        Acceptance Testing
        Alpha Testing
        Beta Testing
    Non-Functional Testing
        Performance Testing
        Load Testing
        Stress Testing
        Usability Testing
        UI vs Usability Testing
        Security Testing
        Portability Testing
        Compatibility Testing
Documentation
    Test Plan
    Attributes of Test Plan
    Test Scenario
    Test Case
    Traceability Matrix
Testing Challenges - Manual and Automation
Myths
Capability Maturity Model (CMM)
About the Tutorial:
Testing is the process of evaluating a system or its component(s) with the intent to find whether it satisfies the specified requirements or not. Testing is executing a system in order to identify any gaps, errors, or missing requirements in contrary to the actual requirements. This tutorial will give you a basic understanding of software testing, its types, methods, levels, and other related terminology.

Audience:
This tutorial is designed for software testing professionals who would like to understand the testing framework in detail along with its types, methods, and levels. It provides enough ingredients to start with the software testing process, from where you can take yourself to higher levels of expertise.

Prerequisites:
Before proceeding with this tutorial, you should have a basic understanding of the software
development life cycle (SDLC).
In addition, you should have a basic understanding of software programming using any
programming language.

Overview:
What is testing?
Testing is the process of evaluating a system or its component(s) with the intent to find whether it satisfies the specified requirements or not. In simple words, testing is executing a system in order to identify any gaps, errors, or missing requirements in contrary to the actual requirements.

According to the ANSI/IEEE 1059 standard, testing can be defined as "a process of analysing a software item to detect the differences between existing and required conditions (that is, defects/bugs) and to evaluate the features of the software item."

Who does Testing?


It depends on the process and the associated stakeholders of the project(s). In the IT industry, large companies have a team with responsibilities to evaluate the developed software in the context of the given requirements. Moreover, developers also conduct testing, which is called unit testing. In most cases, the following professionals are involved in testing a system within their respective capacities:

 Software Tester
 Software Developer
 Project Lead/Manager
 End User
Different companies have different designations for people who test the software, on the basis of their experience and knowledge, such as Software Tester, Software Quality Assurance Engineer, QA Analyst, etc.

It is not possible to test the software at just any time during its cycle. The next two sections state when testing should be started and when it should end during the SDLC.

When to Start Testing?


An early start to testing reduces the cost and time to rework and produces error-free software that is delivered to the client. However, in the Software Development Life Cycle (SDLC), testing can be started from the Requirements Gathering phase and continued till the deployment of the software. It also depends on the development model that is being used. For example, in the waterfall model, formal testing is conducted in the testing phase; but in the incremental model, testing is performed at the end of every increment/iteration and the whole application is tested at the end.

Testing is done in different forms at every phase of the SDLC.

 During the requirement gathering phase, the analysis and verification of the requirements
are also considered as testing.
 Reviewing the design in the design phase with the intent to improve the design is also
considered as testing.
 Testing performed by a developer on completion of code is also categorized as testing.

When to Stop Testing?


It is difficult to determine when to stop testing, as testing is a never-ending process and no one can claim that software is 100% tested. The following aspects should be considered when stopping the testing process:

 Testing Deadlines
 Completion of test cases execution
 Completion of functional and code coverage to a certain point
 Bug rate falls below a certain level and no high priority bugs are identified.
 Management decision.
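The criteria above can be sketched as a simple decision function. This is a minimal sketch: the thresholds (100% case execution, 90% coverage, a bug rate below 2 per day) are illustrative assumptions, not prescribed values.

```python
# Sketch of a stop-testing decision based on the criteria listed above.
# All thresholds (90% coverage, bug rate of 2/day) are hypothetical
# values chosen for illustration.

def can_stop_testing(deadline_reached, cases_executed_pct, coverage_pct,
                     daily_bug_rate, open_high_priority_bugs,
                     management_signoff=False):
    """Return True when the exit criteria suggest testing can stop."""
    if management_signoff or deadline_reached:
        return True            # management decision or deadline overrides
    return (cases_executed_pct >= 100        # all planned cases executed
            and coverage_pct >= 90           # coverage reached agreed point
            and daily_bug_rate < 2           # bug rate fell below level
            and open_high_priority_bugs == 0)

print(can_stop_testing(False, 100, 95, 1, 0))   # True
print(can_stop_testing(False, 100, 95, 1, 3))   # False: high-priority bugs open
```

In practice these criteria come from the test plan; the point of the sketch is that stopping is a combination of measurable conditions plus a management override.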

Why testing is necessary:


Software Testing is necessary because we all make mistakes. Some of those mistakes are
unimportant, but some of them are expensive or dangerous. We need to check everything and
anything we produce because things can always go wrong – humans make mistakes all the time.

Since we assume that our work may have mistakes, we all need to check our own work. However, some mistakes come from bad assumptions and blind spots, so we might make the same mistakes when we check our own work as we made when we did it, and thus may not notice the flaws in what we have done.

Ideally, we should get someone else to check our work because another person is more likely to
spot the flaws.

There are several reasons that clearly tell us why software testing is important and what the major things are that we should consider while testing any product or application.

Software testing is very important because of the following reasons:


1. Software testing is really required to point out the defects and errors that were made during
the development phases.
2. It's essential since it makes sure of the customer's reliability and their satisfaction in the
application.
3. It is very important to ensure the quality of the product. A quality product delivered to the
customers helps in gaining their confidence.
4. Testing is necessary in order to provide the customers with a high-quality product or
software application, one that requires lower maintenance cost and hence results in more
accurate, consistent and reliable results.
5. Testing is required for an effective performance of the software application or product.
6. It's important to ensure that the application does not result in any failures, because these
can be very expensive in the future or in the later stages of development.
7. It's required to stay in the business.

Principles of Testing:
There are seven principles of testing. They are as follows:

1) Testing shows the presence of defects: Testing can show that defects are present, but cannot prove
that there are no defects. Even after testing the application or product thoroughly, we cannot say
that the product is 100% defect free. Testing always reduces the number of undiscovered defects
remaining in the software, but even if no defects are found, that is not a proof of correctness.

2) Exhaustive testing is impossible: Testing everything, including all combinations of inputs and
preconditions, is not possible. So, instead of doing exhaustive testing, we can use risks and
priorities to focus testing efforts. For example: if one screen of an application has 15 input
fields, each having 5 possible values, then to test all the valid combinations you would need
30,517,578,125 (5^15) tests. It is very unlikely that the project timescales would allow for this
number of tests. So, assessing and managing risk is one of the most important activities and
reasons for testing in any project.
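The arithmetic behind this example can be checked in a few lines; the one-test-per-millisecond execution rate used below is an illustrative assumption.

```python
# The arithmetic behind the example above: 15 fields with 5 values each.
fields = 15
values_per_field = 5
combinations = values_per_field ** fields
print(combinations)  # 30517578125 valid combinations

# Even at a (generous) rate of one test per millisecond, running them
# all would take close to a year:
seconds = combinations / 1000
print(round(seconds / 86400))  # roughly 353 days
```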

3) Early testing: In the software development life cycle testing activities should start as early as
possible and should be focused on defined objectives.

4) Defect clustering: A small number of modules contain most of the defects discovered during pre-
release testing, or show the most operational failures.

5) Pesticide paradox: If the same kinds of tests are repeated again and again, eventually the same
set of test cases will no longer be able to find any new bugs. To overcome this "pesticide paradox",
it is really very important to review the test cases regularly, and new and different tests need to be
written to exercise different parts of the software or system to potentially find more defects.

6) Testing is context dependent: Different kinds of systems are tested differently. For example,
safety-critical software is tested differently from an e-commerce site.

7) Absence-of-errors fallacy: If the system built is unusable and does not fulfil the user's needs
and expectations, then finding and fixing defects does not help.
Software Testing objectives and Purpose:
Software Testing has different goals and objectives. The major objectives of Software testing are as
follows:

 Finding defects which may get created by the programmer while developing the software.
 Gaining confidence in and providing information about the level of quality.
 To prevent defects.
 To make sure that the end result meets the business and user requirements.
 To ensure that it satisfies the BRS that is Business Requirement Specification and SRS that is
System Requirement Specifications.
 To gain the confidence of the customers by providing them a quality product.
 Software testing helps in finalizing the software application or product against business and
user requirements. It is very important to have good test coverage in order to test the
software application completely and make sure that it's performing well and as per the
specifications.
 While determining the test coverage, the test cases should be designed well, with maximum
possibilities of finding the errors or bugs. The test cases should be very effective. This
objective can be measured by the number of defects reported per test case: the higher the
number of defects reported, the more effective the test cases are.
 Once the delivery is made to the end users or the customers, they should be able to operate
it without any complaints. In order to make this happen, the tester should know how the
customers are going to use the product, and accordingly should write down the test
scenarios and design the test cases. This will help a lot in fulfilling all the customer's
requirements.

Software testing makes sure that the testing is being done properly and hence that the system is
ready for use. Good coverage means that testing has covered the various areas: functionality of the
application; compatibility of the application with the OS, hardware and different types of browsers;
performance testing to test the performance of the application; and load testing to make sure that
the system is reliable, does not crash, and has no blocking issues. It also determines that the
application can be deployed easily to the machine without any resistance, so that the application
is easy to install, learn and use.

Bug/Defect in Software Testing:


 A defect is an error or a bug in the application which is created. A programmer, while
designing and building the software, can make mistakes or errors. These mistakes or
errors mean that there are flaws in the software. These are called defects.
 When the actual result deviates from the expected result while testing a software
application or product, it results in a defect. Hence, any deviation from the
specification mentioned in the product functional specification document is a defect.
In different organizations it's called differently, like bug, issue, incident or problem.

When the result of the software application or product does not meet the end user's
expectations or the software requirements, it results in a bug or defect. These defects or bugs
occur because of an error in logic or in coding, which results in failure or unpredicted or
unanticipated results.
Additional Information about Defects / Bugs:

 While testing a software application or product, if a large number of defects are found, then
it's called buggy.
 When a tester finds a bug or defect, it's required to convey the same to the developers. Bugs
are therefore reported with detailed steps, in what are called bug reports, issue reports,
problem reports, etc.

Bug Vs Defect

Some argue that bug and defect each have their own definition:

 A bug is the result of a coding error


 A defect is a deviation from the requirements

Maybe an example would make it clearer.

Example: Client wanted the web form to be able to save and close the window.

Scenario #1: The web form has a Save button and a separate Close button. Result: Defect, because
the client wanted one button to save and close the window. The developer misunderstood and created
them separately. Because both buttons performed their requirements, it is not a bug but a defect:
it didn't meet the client's requirement.

Scenario #2: The web form has a Save & Close button, but it only saves and does not close. Result:
Bug, because the button does not perform as required/expected. The developer knows it is supposed
to produce that result but ultimately it didn't (perhaps a coding error).

However, some people argue that a bug is an error found before releasing the software,
whereas a defect is one found by the customer.

Defect Life Cycle


The defect life cycle is the cycle a defect goes through during its lifetime. It starts when the
defect is found and ends when the defect is closed, after ensuring it's not reproduced. The defect
life cycle is related to the bug found during testing.

The bug or defect life cycle includes the following steps or statuses:

 New: When a defect is logged and posted for the first time, its state is given as "new".
 Assigned: After the tester has posted the bug, the lead of the tester approves that the bug is
genuine and assigns the bug to the corresponding developer and developer team. Its state is
given as "assigned".
 Fixed: When the developer makes the necessary code changes and verifies the changes, he/she
can mark the bug status as "fixed" and the bug is passed to the testing team.
 Pending retest: After fixing the defect, the developer gives that particular code for
retesting to the tester. Here the testing is pending on the tester's end, hence its status is
"pending retest".
 Retest: At this stage the tester retests the changed code which the developer has given,
to check whether the defect got fixed or not.
 Verified: The tester tests the bug again after it got fixed by the developer. If the bug is no
longer present in the software, he approves that the bug is fixed and changes the status to
"verified".
 Reopened: If the bug still exists even after being fixed by the developer, the tester
changes the status to "reopened". The bug goes through the life cycle once again.
 Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no
longer exists in the software, he changes the status of the bug to "closed". This state means
that the bug is fixed, tested and approved.
 Duplicate: If the bug is repeated twice, or two bugs mention the same concept, then
one bug's status is changed to "duplicate".
 Rejected: If the developer feels that the bug is not genuine, he rejects the bug; its status
is then given as "rejected".
 Deferred: A bug changed to the "deferred" state is expected to be fixed in the next
releases. There are many reasons for changing a bug to this state: the priority of the bug
may be low, there may be a lack of time for the release, or the bug may not have a major
effect on the software.
 Not a bug: The state is given as "not a bug" if there is no change in the functionality of the
application. For example, if a customer asks for some change in the look and feel of the
application, like a change of colour of some text, then it is not a bug but just a change in
the looks of the application.
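The states above can be sketched as a small state machine. The transition table below is one plausible reading of the list, not a scheme mandated by any particular defect-tracking tool.

```python
# Minimal sketch of the defect life cycle as a state machine.
# The transition table is an illustrative reading of the states above.

TRANSITIONS = {
    "new":            {"assigned", "rejected", "duplicate", "deferred", "not a bug"},
    "assigned":       {"fixed", "deferred", "rejected"},
    "fixed":          {"pending retest"},
    "pending retest": {"retest"},
    "retest":         {"verified", "reopened"},
    "verified":       {"closed"},
    "reopened":       {"assigned"},   # the cycle starts over
    "closed":         set(),          # terminal state
}

def move(current, target):
    """Change a defect's status, refusing illegal jumps."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move from {current!r} to {target!r}")
    return target

# Walk the happy path: new -> ... -> closed
status = "new"
for step in ["assigned", "fixed", "pending retest", "retest", "verified", "closed"]:
    status = move(status, step)
print(status)  # closed
```

Encoding the transitions explicitly makes illegal jumps (say, "new" straight to "closed") fail loudly, which is what real trackers enforce through their workflow configuration.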

The bug has different states in its life cycle, which can be shown diagrammatically.
Difference between Severity and Priority:
There are two key attributes of defects in software testing. They are:

1) Severity

2) Priority

What is the difference between Severity and Priority?

1) Severity:

It is the extent to which the defect can affect the software. In other words, it defines the impact
that a given defect has on the system. For example: if an application or web page crashes when a
remote link is clicked, clicking the remote link is rare for a user, but the impact of the
application crashing is severe. So the severity is high but the priority is low.

Severity can be of following types:

 Critical: The defect results in the termination of the complete system or one or more
components of the system and causes extensive corruption of the data. The failed function is
unusable and there is no acceptable alternative method to achieve the required results; the
severity is then stated as critical.
 Major: The defect results in the termination of the complete system or one or more
components of the system and causes extensive corruption of the data. The failed function is
unusable, but there exists an acceptable alternative method to achieve the required results;
the severity is then stated as major.
 Moderate: The defect does not result in termination, but causes the system to
produce incorrect, incomplete or inconsistent results; the severity is then stated as
moderate.
 Minor: The defect does not result in termination and does not damage the usability
of the system, and the desired results can be easily obtained by working around the defect;
the severity is then stated as minor.
 Cosmetic: The defect is related to the enhancement of the system, where the changes
are related to the look and feel of the application; the severity is then stated as cosmetic.

2) Priority:

Priority defines the order in which we should resolve a defect. Should we fix it now, or can it wait?
This priority status is set by the tester for the developer, mentioning the time frame to fix the
defect. If high priority is mentioned, then the developer has to fix it at the earliest. The priority
status is set based on the customer requirements. For example: if the company name is misspelled on
the home page of the website, then the priority to fix it is high but the severity is low.

Priority can be of following types:

 Low: The defect is an irritant which should be repaired, but repair can be deferred until after
more serious defects have been fixed.
 Medium: The defect should be resolved in the normal course of development activities. It
can wait until a new build or version is created.
 High: The defect must be resolved as soon as possible because the defect is affecting the
application or the product severely. The system cannot be used until the repair has been
done.

A few very important scenarios related to severity and priority, which are asked about during
interviews:

 High Priority & High Severity: An error which occurs in the basic functionality of the
application and will not allow the user to use the system. (E.g., in a site maintaining
student details, if saving a record fails, then this is a high priority and
high severity bug.)
 High Priority & Low Severity: Spelling mistakes that happen on the cover page or in the
heading or title of an application.
 High Severity & Low Priority: An error in the functionality of the application
(for which there is no workaround) that will not allow the user to use the system, but on the
click of a link which is rarely used by the end user.
 Low Priority & Low Severity: Any cosmetic or spelling issues within a paragraph
or in the report (not on the cover page, heading or title).
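The scenarios above suggest a simple triage rule: order defects by priority first, then severity. A minimal sketch; the rank numbers and the four sample defects are invented for illustration.

```python
# Sketch: ordering defects for fixing by priority first, then severity,
# mirroring the interview scenarios above. Rank numbers are arbitrary
# (lower rank = fix sooner).

PRIORITY = {"high": 0, "medium": 1, "low": 2}
SEVERITY = {"critical": 0, "major": 1, "moderate": 2, "minor": 3, "cosmetic": 4}

defects = [
    {"id": 1, "priority": "low",  "severity": "critical"},  # rare crash link
    {"id": 2, "priority": "high", "severity": "minor"},     # misspelled name on home page
    {"id": 3, "priority": "high", "severity": "critical"},  # save record fails
    {"id": 4, "priority": "low",  "severity": "cosmetic"},  # typo inside a report
]

fix_order = sorted(defects, key=lambda d: (PRIORITY[d["priority"]],
                                           SEVERITY[d["severity"]]))
print([d["id"] for d in fix_order])  # [3, 2, 1, 4]
```

Note that the high-priority/low-severity typo (id 2) is fixed before the high-severity/low-priority crash (id 1): priority, not severity, drives the fixing order.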

Defect Report or Bug Report:

This Defect report or Bug report consists of the following information:

 Defect ID – Every bug or defect has its unique identification number
 Defect Description – This includes the abstract of the issue.
 Product Version – This includes the product version of the application in which the defect is
found.
 Detail Steps – This includes the detailed steps of the issue with the screenshots attached so
that developers can recreate it.
 Date Raised – This includes the Date when the bug is reported
 Reported By – This includes the details of the tester who reported the bug like Name and ID
 Status – This field includes the status of the defect, like New, Assigned, Open, Retest,
Verified, Closed, Failed, Deferred, etc.
 Fixed by – This field includes the details of the developer who fixed it like Name and ID
 Date Closed – This includes the Date when the bug is closed
 Severity – Based on the severity (Critical, Major or Minor) it tells us about impact of the
defect or bug in the software application
 Priority – Based on the Priority set (High/Medium/Low) the order of fixing the defect can be
made
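The fields above can be modelled as a small record type. A minimal sketch using a Python dataclass: the field names follow this list, while real trackers (Jira, Bugzilla, etc.) use their own schemas.

```python
# Sketch of the defect report fields above as a dataclass.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DefectReport:
    defect_id: str                 # unique identification number
    description: str               # abstract of the issue
    product_version: str           # version in which the defect was found
    detail_steps: list             # steps to recreate, with screenshots
    date_raised: date
    reported_by: str               # tester name/ID
    severity: str                  # Critical / Major / Minor
    priority: str                  # High / Medium / Low
    status: str = "New"            # New, Assigned, Open, ...
    fixed_by: Optional[str] = None # developer name/ID, once fixed
    date_closed: Optional[date] = None

report = DefectReport(
    defect_id="BUG-101",
    description="Save button does not close the form",
    product_version="2.3.0",
    detail_steps=["Open web form", "Fill fields", "Click Save & Close"],
    date_raised=date(2016, 6, 24),
    reported_by="Tester-07",
    severity="Major",
    priority="High",
)
print(report.status)  # New
```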

Test Process:
Testing is a process rather than a single activity. This process starts from test planning then
designing test cases, preparing for execution and evaluating status till the test closure. So, we
can divide the activities within the fundamental test process into the following basic steps:

1. Planning and Control


2. Analysis and Design
3. Implementation and Execution
4. Evaluating exit criteria and Reporting
5. Test Closure Activities

Planning and Control:


Test Planning has following major tasks

 To determine the scope and risks and identify the objectives of testing
 To determine the test approach.
 To implement the test policy and/or the test strategy. (Test strategy is an outline
that describes the testing portion of the software development cycle. It is created to
inform PM, testers and developers about some key issues of the testing process. This
includes the testing objectives, method of testing, total time and resources required
for the project and the testing environments.).
 To determine the required test resources like people, test environments, PCs, etc.
 To schedule test analysis and design tasks, test implementation, execution and
evaluation.
 To determine the Exit criteria we need to set criteria such as Coverage criteria. (Coverage
criteria are the percentage of statements in the software that must be executed during
testing. This will help us track whether we are completing test activities correctly. They will
show us which tasks and checks we must complete for a particular level of testing before
we can say that testing is finished.)
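Statement coverage as an exit criterion can be sketched in a few lines; the 80% threshold below is an illustrative assumption, not a prescribed value.

```python
# Sketch of a coverage-based exit check, as described above: the
# percentage of statements executed during testing, compared with an
# agreed criterion (80% here is an illustrative threshold).

def statement_coverage(executed_statements, total_statements):
    """Percentage of statements exercised by the test runs."""
    return 100.0 * executed_statements / total_statements

coverage = statement_coverage(420, 500)
print(coverage)            # 84.0
print(coverage >= 80.0)    # True: exit criterion met
```

In practice this number comes from a coverage tool rather than hand counting, but the exit check itself is exactly this comparison against the agreed criterion.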

Test control has the following tasks:

 To measure and analyse the results of reviews and testing
 To monitor and document progress, test coverage and exit criteria
 To provide information on testing
 To initiate corrective actions
 To make decisions

Analysis and Design:


Test analysis and Test Design has the following major tasks:
i. To review the test basis. (The test basis is the information we need in order to start the
test analysis and create our own test cases. Basically it is the documentation on which test
cases are based, such as requirements, design specifications, product risk analysis,
architecture and interfaces. We can use the test basis documents to understand what the
system should do once built.)
ii. To identify test conditions.
iii. To design the tests.
iv. To evaluate the testability of the requirements and system.
v. To design the test environment set-up and identify any required infrastructure and tools.

Implementation and Execution:


During test implementation and execution, we turn the test conditions into test cases and
procedures and other testware such as scripts for automation, the test environment and any other
test infrastructure. (A test case is a set of conditions under which a tester will determine whether
an application is working correctly or not.)
(Testware is a collective term for all the artefacts that serve in combination for testing a software,
like scripts, the test environment and any other test infrastructure, kept for later reuse.)

Test implementation has the following major tasks:


i. To develop and prioritize our test cases by using techniques and create test data for those
tests. (In order to test a software application you need to enter some data for testing most
of the features. Any such specifically identified data which is used in tests is known as test
data.)
We also write some instructions for carrying out the tests which is known as test procedures.
We may also need to automate some tests using test harness and automated tests scripts.
(A test harness is a collection of software and test data for testing a program unit by running
it under different conditions and monitoring its behaviour and outputs.)
ii. To create test suites from the test cases for efficient test execution.
(Test suite is a collection of test cases that are used to test a software program to show
that it has some specified set of behaviours. A test suite often contains detailed instructions
and information for each collection of test cases on the system configuration to be used
during testing. Test suites are used to group similar test cases together.)
iii. To implement and verify the environment.
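The idea of grouping test cases into a suite can be sketched with Python's unittest module (the test class and the checks inside it are illustrative stand-ins, not a real application's tests):

```python
import unittest

class LoginTests(unittest.TestCase):
    # Trivial stand-ins for real test cases; the checks are illustrative.
    def test_valid_credentials(self):
        self.assertTrue(len("user1") > 0)

    def test_empty_password(self):
        self.assertEqual("", "".strip())

# Group related test cases into a suite for efficient execution.
suite = unittest.TestSuite()
suite.addTest(LoginTests("test_valid_credentials"))
suite.addTest(LoginTests("test_empty_password"))

result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Grouping similar cases this way lets the whole suite be executed (or re-executed for confirmation testing) with one runner call.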
Test execution has the following major tasks:
i. To execute test suites and individual test cases following the test procedures.
ii. To re-execute the tests that previously failed in order to confirm a fix. This is known as
confirmation testing or re-testing.
iii. To log the outcome of the test execution and record the identities and versions of the
software under test. The test log is used for the audit trail. (A test log records which test
cases were executed, in what order, who executed them, and the status of each test case
(pass/fail). These details are documented and called the test log.)
iv. To compare actual results with expected results.
v. Where there are differences between actual and expected results, to report the
discrepancies as incidents.
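
The logging and comparison steps above can be sketched as follows; all names (case IDs, tester, build version) are invented for illustration:

```python
# A minimal sketch of logging test execution outcomes.
test_log = []

def execute_and_log(case_id, actual, expected, tester, build_version):
    # Compare the actual result with the expected result and record the outcome.
    status = "pass" if actual == expected else "fail"
    test_log.append({
        "case": case_id, "tester": tester,
        "build": build_version, "status": status,
    })
    return status

execute_and_log("TC-01", actual=4, expected=4, tester="vijay", build_version="1.0.2")
execute_and_log("TC-02", actual=5, expected=4, tester="vijay", build_version="1.0.2")

# Differences between actual and expected results are reported as incidents.
incidents = [entry for entry in test_log if entry["status"] == "fail"]
```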

Evaluating Exit criteria and Reporting:


Based on the risk assessment of the project, we will set the criteria for each test level against which
we will measure "enough testing". These criteria vary from project to project and are known as
exit criteria.

Exit criteria come into the picture when:

1. The maximum number of test cases has been executed with a certain pass percentage.
2. The bug rate falls below a certain level.
3. The deadline has been reached.

Evaluating Exit Criteria has the following major tasks:

1. To check the test logs against the exit criteria specified in test planning.
2. To assess if more tests are needed or if the exit criteria specified should be changed.
3. To write a test summary report for stakeholders.
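
A check of the test logs against the exit criteria can be sketched as follows (the thresholds and counts are hypothetical, not from a real plan):

```python
# Illustrative sketch: checking test logs against exit criteria from the test plan.
def exit_criteria_met(executed, total, passed, min_pass_pct=95.0, min_exec_pct=90.0):
    exec_pct = 100.0 * executed / total   # how much of the suite was run
    pass_pct = 100.0 * passed / executed  # pass rate of what was run
    return exec_pct >= min_exec_pct and pass_pct >= min_pass_pct

ready = exit_criteria_met(executed=190, total=200, passed=184)      # criteria satisfied
not_ready = exit_criteria_met(executed=150, total=200, passed=150)  # only 75% executed
```

If the check fails, the team either runs the missing tests or revisits whether the criteria themselves should change, as in task 2 above.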

Test Closure activities:


Test closure activities are done when the software is delivered. Testing can also be closed for other
reasons, such as:

 When all the information needed for testing has been gathered.
 When a project is cancelled.
 When some target is achieved.
 When a maintenance release or update is done.

Test Closure activities have the following major tasks:

1. To check which planned deliverables are actually delivered and to ensure that all
incident reports have been resolved.
2. To finalize and archive testware such as scripts, test environments, etc. for later reuse.
3. To hand over the testware to the maintenance organization, which will support the
software.
4. To evaluate how the testing went and learn lessons for future releases and projects.

Software Development Life Cycle Model or Methodologies


The software development models are the various processes or methodologies that are
selected for the development of a project depending on the project's aims and goals. There
are many development life cycle models that have been developed in order to achieve
different required objectives. The models specify the various stages of the process and the
order in which they are carried out.

The selection of a model has a very high impact on the testing that is carried out. It defines
the what, where and when of our planned testing, influences regression testing and largely
determines which test techniques to use.

There are various Software development models or methodologies. They are as follows:

1. Waterfall model
2. V model
3. Incremental model
4. RAD model
5. Agile model
6. Iterative model
7. Spiral model
8. Prototype model

Waterfall Model
The Waterfall Model was the first process model to be introduced. It is also referred to as a linear-
sequential life cycle model. It is very simple to understand and use. In a waterfall model, each
phase must be completed fully before the next phase can begin. This type of model is basically used
for small projects with no uncertain requirements. At the end of each phase, a review takes place
to determine whether the project is on the right path and whether or not to continue or discard
the project. In this model, testing starts only after development is complete. In the waterfall
model, phases do not overlap.
Diagram of Waterfall-model:

Advantages of waterfall model:

 This model is simple and easy to understand and use.


 It is easy to manage due to the rigidity of the model – each phase has
specific deliverables and a review process.
 In this model phases are processed and completed one at a time. Phases
do not overlap.

Waterfall model works well for smaller projects where requirements are very
well understood.

Disadvantages of waterfall model:

 Once an application is in the testing stage, it is very difficult to go back and change
something that was not well-thought out in the concept stage.
 No working software is produced until late during the life cycle.
 High amounts of risk and uncertainty.
 Not a good model for complex and object-oriented projects.
 Poor model for long and ongoing projects.
 Not suitable for the projects where requirements are at a moderate to high risk of
changing.

When to use the waterfall model:

 This model is used only when the requirements are very well known, clear
and fixed.
 Product definition is stable.
 Technology is understood.
 There are no ambiguous requirements
 Ample resources with required expertise are available freely
 The project is short.

Very little customer interaction is involved during the development of the
product. Only once the product is ready can it be demoed to the end users.
If any failure occurs after the product is developed, the cost of fixing
such issues is very high, because we need to update everything from the
documents to the logic.

Agile Model:
Agile development model is also a type of Incremental model. Software is developed
in incremental, rapid cycles. This results in small incremental releases with each
release building on previous functionality. Each release is thoroughly tested to
ensure software quality is maintained. It is used for time critical
applications. Extreme Programming (XP) is currently one of the most well-
known agile development life cycle models.

Diagram of Agile model:


Advantages of Agile model:

 Customer satisfaction by rapid, continuous delivery of useful software.


 People and interactions are emphasized rather than process and tools. Customers,
developers and testers constantly interact with each other.
 Working software is delivered frequently (weeks rather than months).
 Face-to-face conversation is the best form of communication.
 Close, daily cooperation between business people and developers.
 Continuous attention to technical excellence and good design.
 Regular adaptation to changing circumstances.
 Even late changes in requirements are welcomed

Disadvantages of Agile model:

 In case of some software deliverables, especially the large ones, it is difficult to assess
the effort required at the beginning of the software development life cycle.
 There is lack of emphasis on necessary designing and documentation.
 The project can easily get taken off track if the customer representative is not clear
about the final outcome they want.
 Only senior programmers are capable of taking the kind of decisions required during
the development process. Hence it has no place for newbie programmers, unless
combined with experienced resources.

When to use agile model:

 When new changes are needed to be implemented. The freedom agile gives to change
is very important. New changes can be implemented at very little cost because of the
frequency of new increments that are produced.
 To implement a new feature the developers need to lose only the work of a few days,
or even only hours, to roll back and implement it.
 Unlike the waterfall model, in the agile model very limited planning is required to get
started with the project. Agile assumes that the end users' needs are ever changing in
a dynamic business and IT world. Changes can be discussed and features can be
newly added or removed based on feedback. This effectively gives the customer the
finished system they want or need.
 Both system developers and stakeholders alike, find they also get more freedom of
time and options than if the software was developed in a more rigid sequential way.
Having options gives them the ability to leave important decisions until more or better
data or even entire hosting programs are available; meaning the project can continue
to move forward without fear of reaching a sudden standstill.

V-Model Method:
V-Model stands for the Verification and Validation model. Just like the waterfall model, the V-
shaped life cycle is a sequential path of execution of processes. Each phase must be
completed before the next phase begins. In the V-model, testing of the product is planned in
parallel with a corresponding phase of development.

Diagram of V-model:
The various phases of the V-model are as follows:

Requirements like BRS and SRS begin the life cycle model just like the waterfall model.
But, in this model before development is started, a system test plan is created. The test plan
focuses on meeting the functionality specified in the requirements gathering.

The high-level design (HLD) phase focuses on system architecture and design. It provides an
overview of the solution, platform, system, product and service/process. An integration test plan
is also created in this phase, in order to test the ability of the pieces of the software system to
work together.

The low-level design (LLD) phase is where the actual software components are designed. It
defines the actual logic for each and every component of the system. Class diagram with all
the methods and relation between classes comes under LLD. Component tests are created in
this phase as well.

The implementation phase is, again, where all coding takes place. Once coding is complete,
the path of execution continues up the right side of the V where the test plans developed
earlier are now put to use.

Coding: This is at the bottom of the V-Shape model. Module design is converted into code by
developers.

Advantages of V-model:

 Simple and easy to use.


 Testing activities like planning and test designing happen well before coding. This saves
a lot of time. Hence there is a higher chance of success over the waterfall model.
 Proactive defect tracking – that is defects are found at early stage.
 Avoids the downward flow of the defects.
 Works well for small projects where requirements are easily understood.

Disadvantages of V-model:

 Very rigid and least flexible.


 Software is developed during the implementation phase, so no early prototypes of the
software are produced.
 If any changes happen midway, then the test documents along with the requirement
documents have to be updated.

When to use the V-model:

 The V-shaped model should be used for small to medium sized projects where
requirements are clearly defined and fixed.
 The V-Shaped model should be chosen when ample technical resources are available
with needed technical expertise.
The customer's high confidence is required for choosing the V-shaped model approach. Since
no prototypes are produced, there is a very high risk involved in meeting customer
expectations.

Verification & Validation


These two terms are very confusing for most people, who use them interchangeably. The following
table highlights the difference between verification and validation.

Sl. No | Verification | Validation
1 | Verification addresses the concern: "Are you building it right?" | Validation addresses the concern: "Are you building the right thing?"
2 | Ensures that the software system meets all the functionality. | Ensures that the functionalities meet the intended behaviour.
3 | Verification takes place first and includes the checking for documentation, code, etc. | Validation occurs after verification and mainly involves the checking of the overall product.
4 | Done by developers. | Done by testers.
5 | It has static activities, as it includes collecting reviews, walkthroughs, and inspections to verify a software. | It has dynamic activities, as it includes executing the software against the requirements.
6 | It is an objective process and no subjective decision should be needed to verify a software. | It is a subjective process and involves subjective decisions on how well a software works.

QA, QC, AND TESTING


Most people get confused when it comes to pinning down the differences among Quality Assurance,
Quality Control, and Testing. Although they are interrelated and, to some extent, can be
considered the same activities, there exist distinguishing points that set them apart. The following
table lists the points that differentiate QA, QC, and Testing.

Quality Assurance | Quality Control | Testing
QA includes activities that ensure the implementation of processes, procedures and standards in the context of verification of developed software and intended requirements. | QC includes activities that ensure the verification of a developed software with respect to documented (or not, in some cases) requirements. | Testing includes activities that ensure the identification of bugs/errors/defects in a software.
Focuses on processes and procedures rather than conducting actual testing on the system. | Focuses on actual testing by executing the software with an aim to identify bugs/defects through implementation of procedures and processes. | Focuses on actual testing.
Process-oriented activities. | Product-oriented activities. | Product-oriented activities.
Preventive activities. | It is a corrective process. | It is a preventive process.
QA is a subset of the Software Test Life Cycle (STLC). | QC can be considered as a subset of Quality Assurance. | Testing is the subset of Quality Control.

Audit and Inspection:


Audit: It is a systematic process to determine how the actual testing process is conducted within an
organization or a team. Generally, it is an independent examination of the processes involved during
the testing of a software. As per IEEE, it is a review of documented processes that an organization
implements and follows. Types of audit include Legal Compliance Audit, Internal Audit, and System Audit.

Inspection: It is a formal technique that involves formal or informal technical reviews of any artefact
by identifying any error or gap. As per IEEE94, an inspection is a formal evaluation technique in which
software requirements, designs, or code are examined in detail by a person or group other than the
author to detect faults, violations of development standards, and other problems.

Testing and Debugging:


Testing: It involves identifying bugs/errors/defects in a software without correcting them. Normally
professionals with a quality assurance background are involved in the identification of bugs. Testing is
performed in the testing phase.

Debugging: It involves identifying, isolating, and fixing the problems/bugs. Developers who code the
software conduct debugging upon encountering an error in the code. Debugging is a part of White-
Box Testing or unit testing. Debugging can be performed in the development phase while conducting
unit testing, or in later phases while fixing the reported bugs.

TYPES OF TESTING
This section describes the different types of testing that may be used to test a software during SDLC.

Manual Testing
Manual testing includes testing a software manually, i.e., without using any automated tool or any
script. In this type, the tester takes over the role of an end-user and tests the software to identify
any unexpected behaviour or bug. There are different stages of manual testing such as unit testing,
integration testing, system testing and user acceptance testing.

Testers use test plans, test cases, or test scenarios to test a software to ensure the completeness of
testing. Manual testing also includes exploratory testing, as testers explore the software to identify
errors in it.
Automation Testing
Automation testing, which is also known as Test Automation, is when the tester writes scripts and
uses other software to test the product. This process involves the automation of a manual process.
Automation testing is used to re-run, quickly and repeatedly, the test scenarios that were performed
manually.

Apart from regression testing, automation testing is also used to test the application from a load,
performance, and stress point of view. It increases the test coverage, improves accuracy, and saves
time and money in comparison to manual testing.

What to Automate?
It is not possible to automate everything in a software. Areas where a user can make
transactions, such as login or registration forms, and any area where a large number of
users can access the software simultaneously, should be automated.
Furthermore, all GUI items, connections with databases, field validation, etc. can be
efficiently tested by automating the manual process.
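Field validation of the kind described above can be sketched like this; the email and password rules are purely illustrative assumptions, not a real product's policy:

```python
import re

# Hypothetical field-validation checks of the kind that are worth automating,
# e.g. for a login or registration form.
def is_valid_email(value):
    # A deliberately simple pattern for illustration only.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}", value) is not None

def is_valid_password(value):
    # Illustrative policy: at least 8 characters and at least one digit.
    return len(value) >= 8 and any(c.isdigit() for c in value)

results = {
    "email_ok": is_valid_email("qa@example.com"),
    "email_bad": is_valid_email("not-an-email"),
    "pwd_ok": is_valid_password("s3cretpass"),
    "pwd_bad": is_valid_password("short"),
}
```

Automated checks like these can be run on every build, which is exactly where automation pays off over repeating the same manual entries.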
When to Automate?
Test Automation should be used by considering the following aspects of a software:

 Large and critical projects


 Projects that require testing the same areas frequently
 Requirements not changing frequently

How to Automate?
Automation is done by using a supportive computer language like VB scripting and an automated
software application. There are many tools available that can be used to write automation scripts.
Before mentioning the tools, let us identify the process that can be used to automate the testing
process:

 Identifying areas within a software for automation


 Selection of appropriate tool for test automation
 Writing test scripts
 Development of test suites
 Execution of scripts
 Creation of result reports
 Identification of any potential bugs or performance issues
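
The overall flow can be sketched in miniature; the "application" here is a stand-in function, not a real system, and the script names are invented:

```python
# A toy sketch of the automation flow above: run scripted checks over the
# feature under test and produce a result report.
def app_add(a, b):  # stand-in for the application feature under test
    return a + b

# Each "test script" pairs a name with an automated check.
test_scripts = [
    ("adds_positives", lambda: app_add(2, 3) == 5),
    ("adds_negatives", lambda: app_add(-1, -1) == -2),
]

# Execute the scripts and create a result report.
report = {name: ("pass" if check() else "fail") for name, check in test_scripts}
```

Real automation tools (Selenium, Appium, etc.) follow the same shape: identify the areas to automate, script the checks, execute them, and report results.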

Software Testing Tools:


 Selenium
 Robotium
 Monkey Runner
 Appium
 UIAutomator
 Coded UI
Test Bed /Test Environment:
Test Environment: The hardware and software environment in which tests will be run, and any other
software with which the software under test interacts when under test including stubs and test
drivers.

Test Bed: An execution environment configured for testing. It may consist of specific hardware, OS,
network topology, configuration of the product under test, other application or system software, etc.
The test plan for a project should enumerate the test bed(s) to be used.

Testing Methods
There are different methods that can be used for software testing. This chapter briefly
describes the methods available.

Black-Box Testing:
The technique of testing without having any knowledge of the interior working of the
application is called black-box testing. The tester is oblivious to the system architecture and
does not have access to the source code. Typically, while performing a black-box test, a
tester will interact with the system's user interface, providing inputs and examining
outputs without knowing how and where the inputs are worked upon.
The following table lists the advantages and disadvantages of black-box testing.

Advantages:
 Well suited and efficient for large code segments.
 Code access is not required.
 Clearly separates the user's perspective from the developer's perspective through visibly defined roles.
 A large number of moderately skilled testers can test the application with no knowledge of implementation, programming language, or operating systems.

Disadvantages:
 Limited coverage, since only a selected number of test scenarios is actually performed.
 Inefficient testing, due to the fact that the tester only has limited knowledge about an application.
 Blind coverage, since the tester cannot target specific code segments or error-prone areas.
 The test cases are difficult to design.

White-Box Testing:
White-box testing is the detailed investigation of the internal logic and structure of the code.
White-box testing is also called glass testing or open-box testing. In order to perform
white-box testing on an application, a tester needs to know the internal workings of the
code.
The tester needs to look inside the source code and find out which chunk of the code
is behaving inappropriately.
The following table lists the advantages and disadvantages of white-box testing.

Advantages:
 As the tester has knowledge of the source code, it becomes very easy to find out which type of data can help in testing the application effectively.
 It helps in optimizing the code.
 Extra lines of code can be removed, which can bring in hidden facts.
 Due to the tester's knowledge about the code, maximum coverage is attained during test scenario writing.

Disadvantages:
 Due to the fact that a skilled tester is needed to perform white-box testing, the costs are increased.
 Sometimes it is impossible to look into every nook and corner to find out hidden errors that may create problems, as many paths will go untested.
 It is difficult to maintain white-box testing, as it requires specialized tools like code analysers and debugging tools.
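
A small sketch of the white-box idea: because the tester can see the code's branches, one input can be chosen per path (the function and inputs are illustrative, not from a real application):

```python
# White-box sketch: knowing the code's branches lets the tester choose
# inputs that exercise every path of this (illustrative) function.
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"

# One input per branch gives full branch coverage of classify().
branch_results = [classify(-5), classify(0), classify(7)]
```

This is exactly the "maximum coverage" advantage listed above: the test data is designed from the code's structure rather than guessed from the outside.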

Grey-Box Testing:
Grey-box testing is a technique to test the application with limited knowledge of
the internal workings of an application. In software testing, the phrase "the more you know,
the better" carries a lot of weight while testing an application.
Mastering the domain of a system always gives the tester an edge over someone with
limited domain knowledge. Unlike black-box testing, where the tester only tests the
application's user interface, in grey-box testing the tester has access to design documents
and the database. Having this knowledge, a tester can prepare better test data and test
scenarios while making a test plan.

Advantages:
 Offers the combined benefits of black-box and white-box testing wherever possible.
 Grey-box testers don't rely on the source code; instead they rely on interface definitions and functional specifications.
 Based on the limited information available, a grey-box tester can design excellent test scenarios, especially around communication protocols and data type handling.

Disadvantages:
 Since access to the source code is not available, the ability to go over the code and test coverage is limited.
 The tests can be redundant if the software designer has already run a test case.
 Testing every possible input stream is unrealistic because it would take an unreasonable amount of time; therefore, many program paths will go untested.

Comparison of Testing Methods:


The following table lists the points that differentiate black-box testing, grey-box testing, and
white-box testing.

Black-box Testing | Grey-box Testing | White-box Testing
The internal workings of an application need not be known. | The tester has limited knowledge of the internal workings of the application. | The tester has full knowledge of the internal workings of the application.
Also known as closed-box testing, data-driven testing or functional testing. | Also known as translucent testing, as the tester has limited knowledge of the insides of the application. | Also known as clear-box testing, structural testing, or code-based testing.
Performed by end-users and also by testers and developers. | Performed by end-users and also by testers and developers. | Normally done by testers and developers.
Testing is based on external expectations; the internal behaviour of the application is unknown. | Testing is done on the basis of high-level database diagrams and data flow diagrams. | Internal workings are fully known and the tester can design test data accordingly.
It is exhaustive and the least time-consuming. | Partly time-consuming and exhaustive. | The most exhaustive and time-consuming type of testing.
Not suited for algorithm testing. | Not suited for algorithm testing. | Suited for algorithm testing.
This can only be done by the trial-and-error method. | Data domains and internal boundaries can be tested, if known. | Data domains and internal boundaries can be better tested.

Testing Levels
There are different levels during the process of testing. In this chapter, a brief
description is provided about these levels.
Levels of testing include different methodologies that can be used while conducting
software testing. The main levels of software testing are:

 Functional Testing
 Non-Functional testing

Functional Testing:
This is type of black-box testing that is based on the specifications of the software that is to
be tested. The application is tested by providing input and then the results are examined
that need to conform to the functionality it was intended for. Functional testing of a
soft a e is o du ted o a o plete, i teg ated s ste to e aluate the s ste s
compliance with its specified requirements.
There are five steps that are involved while testing an application for functionality.
Steps Description
1 The determination of the functionality that the intended application is meant to
perform.
2 The creation of test data based on the specifications of the application.

3 The determination of the output based on the test data and the specifications of the application.

4 The writing of test scenarios and the execution of test cases.

5 The comparison of actual and expected results based on the executed test cases.

An effective testing practice will see the above steps applied to the testing policies of every
organization and hence it will make sure that the organization maintains the strictest of
standards when it comes to software quality.
Smoke and Sanity Testing:

Smoke Testing:

It is performed after a software build to ascertain that the critical functionalities of the
program are working fine. It is executed "before" any detailed functional or regression tests
are executed on the software build. The purpose is to reject a badly broken application, so
that the QA team does not waste time installing and testing the software application.
In smoke testing, the test cases chosen cover the most important functionality or
components of the system. The objective is not to perform exhaustive testing, but to verify
that the critical functionalities of the system are working fine.
For example, a typical smoke test would be: verify that the application launches successfully,
check that the GUI is responsive, etc.
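
Such a smoke test can be sketched as follows; the App class here is a hypothetical stand-in for a real application, and the checks are illustrative:

```python
# Illustrative smoke checks: a handful of shallow tests over the most
# critical functionality, run before any detailed testing begins.
class App:
    # Hypothetical stand-in for the application under test.
    def launch(self):
        return True

    def home_screen_loads(self):
        return True

def smoke_test(app):
    checks = [app.launch(), app.home_screen_loads()]
    # Reject the build outright if any critical check fails.
    return all(checks)

build_accepted = smoke_test(App())
```

If `smoke_test` returns False, the build is handed back without any deeper functional or regression testing, which is exactly the time-saving described above.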

Sanity Testing:

After receiving a software build with minor changes in code or functionality, sanity testing
is performed to ascertain that the bugs have been fixed and no further issues have been
introduced due to these changes. The goal is to determine that the proposed functionality works
roughly as expected. If the sanity test fails, the build is rejected to save the time and costs
involved in more rigorous testing.

The objective is "not" to verify the new functionality thoroughly, but to determine that the
developer has applied some rationality (sanity) while producing the software. For instance,
if your scientific calculator gives the result of 2 + 2 = 5, then there is no point in testing
advanced functionalities like sin 30 + cos 50.

Smoke Testing | Sanity Testing
Smoke testing is performed to ascertain that the critical functionalities of the program are working fine. | Sanity testing is done to check that the new functionality / bug fixes work.
The objective of this testing is to verify the "stability" of the system in order to proceed with more rigorous testing. | The objective of this testing is to verify the "rationality" of the system in order to proceed with more rigorous testing.
This testing is performed by the developers or testers. | Sanity testing is usually performed by testers.
Smoke testing is usually documented or scripted. | Sanity testing is usually not documented and is unscripted.
Smoke testing is a subset of regression testing. | Sanity testing is a subset of acceptance testing.
Smoke testing exercises the entire system from end to end. | Sanity testing exercises only a particular component of the entire system.
Smoke testing is like a general health check-up. | Sanity testing is like a specialized health check-up.

Points to note:

 Both sanity tests and smoke tests are ways to avoid wasting time and effort by
quickly determining whether an application is too flawed to merit any rigorous
testing.
 Sanity Testing is also called tester acceptance testing.
 Smoke testing performed on a particular build is also known as a build verification
test.
 One of the best industry practices is to conduct a daily build and smoke test in
software projects.
 Both smoke and sanity tests can be executed manually or using an automation
tool. When automated tools are used, the tests are often initiated by the same
process that generates the build itself.
 As per the needs of testing, you may have to execute both sanity and smoke tests
on the software build. In such cases, you will first execute smoke tests and then go
ahead with sanity testing. In industry, test cases for sanity testing are commonly
combined with those for smoke tests, to speed up test execution. Hence it is
common that the terms are confused and used interchangeably.

Unit Testing:
This type of testing is performed by developers before the setup is handed over to the
testing team to formally execute the test cases. Unit testing is performed by the respective
developers on the individual units of source code in their assigned areas. The developers use
test data that is different from the test data of the quality assurance team.
The goal of unit testing is to isolate each part of the program and show that individual parts
are correct in terms of requirements and functionality.
Limitations of Unit Testing:
Testing cannot catch each and every bug in an application. It is impossible to evaluate every
execution path in every software application. The same is the case with unit testing.
There is a limit to the number of scenarios and test data that a developer can use to verify a
source code. After having exhausted all the options, there is no choice but to stop unit
testing and merge the code segment with other units.
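
A minimal unit-test sketch using Python's unittest module (the discount function is an invented example unit, not from a real codebase):

```python
import unittest

# The unit under test: a deliberately tiny example function.
def discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountUnitTests(unittest.TestCase):
    # Each test isolates one behaviour of the unit, per the goal stated above.
    def test_typical_discount(self):
        self.assertAlmostEqual(discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(discount(99.0, 0), 99.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(DiscountUnitTests)
)
```

Once these unit-level checks pass, the unit is merged with other units and the interfaces between them become the subject of integration testing.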

Integration Testing:
 Integration testing tests the integration or interfaces between components, interactions with
different parts of the system such as the operating system, file system and hardware, or
interfaces between systems.
 Integration testing is also performed after integrating two different components together.
For example, when two different modules 'Module A' and 'Module B' are integrated,
integration testing is done.

 Integration testing is done by a specific integration tester or test team.


 Integration testing follows two approaches, known as the 'Top Down' approach and
the 'Bottom Up' approach:

Below are the integration testing techniques:

1. Big Bang integration testing: In Big Bang integration testing, all components or modules
are integrated simultaneously, after which everything is tested as a whole. For example, all
the modules from 'Module 1' to 'Module 6' are integrated simultaneously and then the
testing is carried out.

Advantage: Big Bang testing has the advantage that everything is finished before integration
testing starts.

Disadvantage: The major disadvantage is that, in general, it is time consuming and difficult
to trace the cause of failures because of this late integration.

2. Top-down integration testing: Testing takes place from top to bottom, following the
control flow or architectural structure (e.g. starting from the GUI or main menu).
Lower-level components or systems are substituted by stubs.
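The idea of a stub can be sketched in a few lines. Here a hypothetical top-level `ReportService` is tested first, while the lower-level database module, not yet integrated, is replaced by a stub that returns canned data through the same interface.

```python
# Top-down sketch: the top-level component is tested first, with the
# lower-level module replaced by a stub (all names are illustrative).

class DatabaseStub:
    """Stand-in for the real, not-yet-integrated database module."""
    def fetch_orders(self, customer_id):
        # Returns canned data instead of querying a real database.
        return [{"id": 1, "total": 40.0}, {"id": 2, "total": 60.0}]

class ReportService:
    """Top-level component under test."""
    def __init__(self, db):
        self.db = db  # real module or stub - same interface

    def total_spent(self, customer_id):
        return sum(o["total"] for o in self.db.fetch_orders(customer_id))

service = ReportService(DatabaseStub())
print(service.total_spent(customer_id=7))  # 100.0
```

When the real database module is ready, it replaces `DatabaseStub` without changing `ReportService` or its tests.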
Advantages of Top-Down approach:

 The tested product is very consistent because the integration testing is basically
performed in an environment that is almost similar to the real one.
 Stubs can be written in less time because, compared to drivers, stubs are
simpler to author.

Disadvantages of Top-Down approach:

 Basic functionality is tested at the end of the cycle.

3. Bottom-up integration testing: Testing takes place from the bottom of the control flow
upwards. Higher-level components or systems are substituted by drivers.
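A driver is the mirror image of a stub: it stands in for the not-yet-built caller. In this hypothetical sketch the low-level tax module is ready first, so a small test driver exercises it directly with sample inputs.

```python
# Bottom-up sketch: the low-level module is ready first, so a test
# driver calls it, standing in for the caller that does not exist yet
# (function and rate are illustrative).

def compute_tax(amount, rate=0.18):
    """Low-level unit, integrated and tested first."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * rate, 2)

def driver():
    """Test driver: exercises the lower-level module with sample inputs."""
    cases = [(100.0, 18.0), (0.0, 0.0), (49.5, 8.91)]
    for amount, expected in cases:
        result = compute_tax(amount)
        assert result == expected, f"{amount}: got {result}, want {expected}"
    print("All driver checks passed.")

driver()
```

Once the real higher-level caller is integrated, the driver is discarded; this throwaway work is exactly the cost the disadvantages below refer to.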

Advantage of Bottom-Up approach:

 In this approach, development and testing can be done together so that the product or
application will be efficient and built as per the customer specifications.

Disadvantages of Bottom-Up approach:

 Key interface defects are caught only at the end of the cycle.


 It is required to create test drivers for modules at all levels except the top control module.

System Testing:
System testing tests the system as a whole. Once all the components are integrated, the
application as a whole is tested rigorously to see that it meets the specified quality
standards. This type of testing is performed by a specialized testing team.
System testing is important because of the following reasons:

 System testing is the first step in the Software Development Life Cycle where the
application is tested as a whole.
 The application is tested thoroughly to verify that it meets the functional and
technical specifications.
 The application is tested in an environment that is very close to the production
environment where the application will be deployed.
 System testing enables us to test, verify, and validate both the business
requirements as well as the application architecture.

Regression Testing:
Whenever a change in a software application is made, it is quite possible that other areas
within the application have been affected by this change. Regression testing is performed to
verify that a fixed bug has not resulted in another functionality or business rule violation. The
intent of regression testing is to ensure that a change, such as a bug fix, does not result in
another fault being uncovered in the application.
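The intent can be sketched concretely. Suppose a (hypothetical) bug "empty cart crashed checkout" was fixed; the regression suite pins down both the fix and the previously working behaviour, so a future change cannot silently reintroduce either fault.

```python
# Regression-test sketch (hypothetical shop example): the suite guards a
# past bug fix AND behaviour that already worked before the fix.

def cart_total(items):
    # Fixed code: an empty cart now returns 0.0 instead of crashing.
    if not items:
        return 0.0
    return sum(price * qty for price, qty in items)

def run_regression_suite():
    # Check 1: the original bug must stay fixed.
    assert cart_total([]) == 0.0, "regression: empty-cart bug is back"
    # Check 2: behaviour that worked before the fix must still work.
    assert cart_total([(10.0, 2), (5.0, 1)]) == 25.0
    print("Regression suite passed.")

run_regression_suite()
```

Because this suite is rerun after every change, it is a natural candidate for automation.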
Regression testing is important because of the following reasons:

 Minimize the gaps in testing when an application with changes made has to be
tested.
 Testing the new changes to verify that the changes made did not affect any other
area of the application.
 Mitigates risks in the application when changes are made.
 Test coverage is increased without compromising timelines.
 Increases speed to market the product.

Acceptance Testing:
This is arguably the most important type of testing, as it is conducted by the Quality
Assurance Team who will gauge whether the application meets the intended specifications
and satisfies the client's requirements. The QA team will have a set of pre-written scenarios
and test cases that will be used to test the application.
More ideas will be shared about the application, and more tests can be performed on it to
gauge its accuracy against the reasons why the project was initiated. Acceptance tests are not
only intended to point out simple spelling mistakes, cosmetic errors, or interface gaps, but
also to point out any bugs in the application that will result in system crashes or major
errors in the application.
By performing acceptance tests on an application, the testing team will deduce how the
application will perform in production. There are also legal and contractual requirements for
acceptance of the system.
Alpha Testing:
This test is the first stage of testing and will be performed amongst the teams (developers
and QA teams). Unit testing, integration testing and system testing, when combined
together, are known as alpha testing. During this phase, the following aspects will be tested in
the application:

 Spelling Mistakes
 Broken links
 The Application will be tested on machines with the lowest specification to test
loading times and any latency problems.

Beta Testing:
This test is performed after alpha testing has been successfully performed. In beta testing, a
sample of the intended audience tests the application. Beta testing is also known as pre-
release testing. Beta test versions of software are ideally distributed to a wide audience on
the web, partly to provide a preview of the next release. In this phase, the audience will be
testing the following:

 Users will install, run the application and send their feedback to the project team.
 Typographical errors, confusing application flow, and even crashes.
 Using this feedback, the project team can fix the problems before releasing the
software to the actual users.
 The more issues you fix that solve real user problems, the higher the quality of your
application will be.
 Having a higher-quality application when you release it to the general public will
increase customer satisfaction.

Non-Functional Testing:
This section is based upon testing an application from its non-functional attributes. Non-
functional testing involves testing software against requirements that are non-functional in
nature but important, such as performance, security, user interface, etc.
Some of the important and commonly used non-functional testing types are discussed
below.
Performance Testing:
It is mostly used to identify any bottlenecks or performance issues rather than finding bugs
in the software. There are different causes that contribute to lowering the performance of
software:

 Network delay
 Client-side processing
 Database transaction processing
 Load balancing between servers
 Data rendering
Performance testing is considered one of the important and mandatory testing types in
terms of the following aspects:

 Speed
 Stability
 Scalability
Performance testing can be either qualitative or quantitative and can be divided into
different sub-types such as load testing and stress testing.
Load Testing:
It is a process of testing the behaviour of software by applying maximum load in terms of
the software accessing and manipulating large input data. It can be done at both normal and
peak load conditions. This type of testing identifies the maximum capacity of the software
and its behaviour at peak time.
Most of the time, load testing is performed with the help of automated tools such as
LoadRunner, Apache JMeter, Visual Studio Load Test, etc.
Virtual users are defined in the automated testing tool and the script is executed to verify
the load testing for the software. The number of users can be increased or decreased
concurrently or incrementally depending on the requirements.
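The "virtual user" idea can be sketched with plain threads. Each thread below plays one virtual user against a fake request handler (the handler and its latency are hypothetical stand-ins for the system under test); tools like JMeter do the same thing at real scale, against real servers.

```python
# Load-test sketch: threads stand in for "virtual users", each hitting a
# fake request handler and recording its own latency.
import threading
import time

def handle_request(user_id):
    """Stand-in for the system under test (simulated latency)."""
    time.sleep(0.01)
    return f"ok:{user_id}"

def virtual_user(user_id, results):
    start = time.perf_counter()
    response = handle_request(user_id)
    results.append((response, time.perf_counter() - start))

def run_load(virtual_users=20):
    results = []
    threads = [threading.Thread(target=virtual_user, args=(i, results))
               for i in range(virtual_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    latencies = [latency for _, latency in results]
    print(f"{len(results)} requests, max latency {max(latencies):.3f}s")
    return results

run_load()
```

Raising `virtual_users` step by step until latency degrades is exactly the incremental ramp-up described above.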
Stress Testing:
Stress testing includes testing the behaviour of a software under abnormal conditions. For
example, it may include taking away some resources or applying a load beyond the actual
load limit.
The aim of stress testing is to test the software by applying the load to the system and
taking over the resources used by the software to identify the breaking point. This testing
can be performed by testing different scenarios such as:
 Shutdown or restart of network ports randomly
 Turning the databases on or off
 Running different processes that consume resources such as CPU, memory, server, etc.
Usability Testing:
Usability testing is a black-box technique and is used to identify any error(s) and
improvements in the software by observing the users through their usage and operation.
According to Nielsen, usability can be defined in terms of five factors, i.e., efficiency of use,
learnability, memorability, errors/safety, and satisfaction. According to him, the usability
of a product will be good and the system is usable if it possesses the above factors.
Nigel Bevan and Macleod considered that usability is the quality requirement that can be
measured as the outcome of interactions with a computer system. This requirement can be
fulfilled and the end-user will be satisfied if the intended goals are achieved effectively with
the use of proper resources.
Molich in 2000 stated that a user-friendly system should fulfil the following five goals, i.e.,
easy to learn, easy to remember, efficient to use, satisfactory to use, and easy to
understand.

UI Vs Usability Testing
UI testing involves testing the Graphical User Interface of the software. UI testing ensures
that the GUI functions according to the requirements; the GUI is tested in terms of colour,
alignment, size and other properties.
On the other hand, usability testing ensures a good and user-friendly GUI that can be easily
handled. UI testing can be considered a sub-part of usability testing.
Security Testing:
Security testing involves testing a software in order to identify any flaws and gaps from
security and vulnerability point of view. Listed below are the main aspects that security
testing should ensure:

 Confidentiality
 Integrity
 Authentication
 Availability
 Authorization
 Non-repudiation
 Software is secure against known and unknown vulnerabilities.
 Software data is secure
 Software is according to all security regulations
 Input checking and validation
 SQL insertion attacks
 Injection flaws
 Session Management issues
 Cross-site scripting attacks
 Buffer overflows vulnerabilities
 Directory traversal attacks
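One of the listed checks, SQL injection, can be demonstrated in a few lines against an in-memory SQLite database. The table and data are illustrative; the probe string is a classic injection test input, and the sketch contrasts unsafe string-building with a parameterized query.

```python
# Security-test sketch: a classic SQL injection probe, showing why the
# unsafe query leaks data while the parameterized one does not
# (table and contents are illustrative).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "nobody' OR '1'='1"   # typical injection test input

# Unsafe: the payload changes the meaning of the query.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'").fetchall()
print("unsafe query leaked:", unsafe)       # leaks alice's secret

# Safe: the placeholder treats the payload as plain data, not SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)).fetchall()
print("parameterized query leaked:", safe)  # leaks nothing
```

A security tester feeds inputs like `payload` into every field that reaches the database; any query built by string concatenation is a finding.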
Portability Testing:
Portability testing includes testing software with the aim of ensuring its reusability and that
it can be moved from one environment to another as well. Following are the strategies that can be
used for portability testing:

 Transferring an installed software from one computer to another


 Building executables (.exe) to run the software on different platforms.
Portability testing can be considered one of the sub-parts of system testing, as this
testing type includes overall testing of software with respect to its usage over different
environments. Computer hardware, operating systems, and browsers are the major focus of
portability testing. Some of the pre-conditions for portability testing are as follows.

 Software should be designed and coded keeping in mind the portability requirements.
 Unit testing has been performed on the associated components
 Integration testing has been performed.
 Test environment has been established.
Compatibility Testing:
 It is a type of non-functional testing.
 Compatibility testing is a type of software testing used to ensure compatibility of the
system/application/website built with various other objects such as other web
browsers, hardware platforms, users (in case it's a very specific type of requirement,
such as a user who speaks and can read only a particular language), operating
systems, etc. This type of testing helps find out how well a system performs in a
particular environment that includes hardware, network, operating system and
other software, etc.
 It is basically the testing of the application or the product built against the computing
environment.
 It tests whether the application or the software product built is compatible with the
hardware, operating system, database or other system software or not.

Documentation
Testing documentation involves the documentation of artifacts that should be developed
before or during the testing of software.
Documentation for software testing helps in estimating the testing efforts required, test
coverage, requirement tracking/tracing, etc. This section describes some of the commonly
used documented artifacts related to software testing such as

 Test Plan
 Test Scenario
 Test Case
 Traceability Matrix

Test Plan:
A test plan outlines the strategy that will be used to test an application, the resources that
will be used, the test environment in which testing will be performed, the limitations of
the testing, and the schedule of testing activities. Typically, the Quality Assurance Team Lead
is responsible for writing a test plan.
A test plan includes the following:

 Introduction to the Test plan document


 Assumptions while testing the application
 List of test cases included in testing the application
 List of features to be tested.
 What sort of approach to use while testing the software
 List of deliverables that need to be tested
 The resources allocated for testing the application
 Any risks involved during the testing process
 A schedule of tasks and milestones to be achieved
Broader attributes are specified in the section below.

Attributes of Test Plan:


 Preparation of introduction
 Defining high level Functional Requirements
 Identification of Types of Tests
 Identification of Test Exit Criteria
 Defining Regression testing Strategy
 Defining deliverables
 Organizing Test Teams.
 Establishing Test Environments
 Defining Dependencies.
 Creation of Test Schedules
 Selection of Testing tools
 Defining Defect recording/Tracking procedures
 Establishing change request procedures
 Establishing version control procedures
 Defining configuration Builds Procedures
 Defining project issue resolution procedures
 Establishing reporting procedures
 Defining Approval Procedures.
Although we can create our test plan in many different ways, the above illustration is a sort
of framework that contains almost all types of planning considerations essential for a testing
project. We can use this as a checklist for selecting various elements that suit us. We may
omit some of the steps to suit our own process, the nature of the project, time
constraints, organizational requirements, etc.

Test Scenario:
It is a one-line statement that describes what area of the application will be tested. Test
scenarios are used to ensure that all process flows are tested from end to end. A particular
area of an application can have as little as one test scenario to a few hundred scenarios,
depending on the magnitude and complexity of the application.
The terms 'test scenario' and 'test case' are used interchangeably; however, a test scenario
has several steps, whereas a test case has a single step. Viewed from this perspective, test
scenarios are test cases, but they include several test cases and the sequence in which they
should be executed. Apart from this, each test is dependent on the output from the
previous test.

Test case:
A test case involves a set of steps, conditions, and inputs that can be used while performing
testing tasks. The main intent of this activity is to ensure whether a software passes or fails
in terms of its functionality and other aspects. There are many types of test cases, such as
functional, negative, interrupt, logical, and UI test cases.
Furthermore, test cases are written to keep track of the testing coverage of a software.
Generally, there are no formal templates that can be used during test case writing. However,
the following components are always available and included in every test case:

 Test case ID
 Product module
 Product Version
 Revision history
 Purpose
 Assumptions
 Pre-Conditions
 Steps
 Expected outcome
 Actual outcome
 Post-conditions
Many test cases can be derived from a single test scenario. In addition, sometimes multiple
test cases are written for a single software; collectively they are known as a test suite.
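The common test-case fields listed above map naturally onto a data structure, which is how test management tools store and report them. The sketch below uses a Python dataclass; the field names follow the list above (a subset, for brevity), and all values are illustrative.

```python
# Sketch: a test case as a data structure, so cases can be stored,
# filtered, and reported on (fields follow the list above; values
# are illustrative).
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    module: str
    purpose: str
    pre_conditions: str
    steps: list = field(default_factory=list)
    expected_outcome: str = ""
    actual_outcome: str = ""

    def passed(self):
        # A case passes when the actual outcome matches the expected one.
        return self.actual_outcome == self.expected_outcome

tc = TestCase(
    case_id="TC-LOGIN-001",
    module="Login",
    purpose="Valid credentials open the dashboard",
    pre_conditions="User 'demo' exists",
    steps=["Open login page", "Enter demo/demo123", "Click Sign in"],
    expected_outcome="Dashboard displayed",
    actual_outcome="Dashboard displayed",
)
print(tc.case_id, "PASS" if tc.passed() else "FAIL")
```

A test suite is then simply a list of such objects, and coverage reports are queries over that list.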

Traceability Matrix:
Traceability Matrix (also known as Requirement Traceability Matrix - RTM) is a table that is
used to trace the requirements during the Software Development Life Cycle. It can be used
for forward tracing (i.e. from requirements to design or coding) or backward tracing (i.e.
from coding to requirements). There are many user-defined templates for RTM.
Each requirement in the RTM document is linked with its associated test case so that testing
can be done as per the mentioned requirement. Furthermore, Bug ID is also included and
linked with its associated requirements and test case.
The main goals for this matrix are:

 Make sure the software is developed as per the mentioned requirements.


 Helps in finding the root cause of any bug.
 Helps in tracing the developed documents during different phases of SDLC.

Testing Challenges-Manual and Automation:


Software testing has a lot of challenges, both in manual as well as in automation testing. Generally, in
a manual testing scenario, developers throw the build to the test team, assuming that the responsible
test team or tester will pick the build and come to ask what the build is about. This is the case in
organizations not following so-called 'processes'. The tester is the middleman between the developing
team and the customers, handling the pressure from both sides.

1) Testing the complete application:


Is it possible? I think it is impossible. There are millions of test combinations. It's not possible to
test each and every combination both in manual as well as in automation testing. If you try
all these combinations you will never ship the product ;-)

2) Misunderstanding of company processes:


Sometimes you just don't pay proper attention to what the company-defined processes are
and what purposes they serve. There is a myth among testers that they should only go
with company processes, even when these processes are not applicable to their current
testing scenario. This results in incomplete and inappropriate application testing.

3) Relationship with developers:


A big challenge. It requires a very skilled tester to handle this relationship positively, even while
completing the work in the tester's way. There are simply hundreds of excuses developers or
testers can make when they do not agree with some points. For this, the tester also requires
good communication, troubleshooting and analyzing skills.

4) Regression testing:
As the project goes on expanding, the regression testing work simply becomes uncontrollable:
pressure to handle the current functionality changes, previous working functionality checks
and bug tracking.

5) Lack of skilled testers:


I will call this a wrong management decision while selecting or training testers for the
project task in hand. These unskilled fellows may add more chaos than simplifying the
testing work. This results in incomplete, insufficient and ad-hoc testing throughout the
testing life cycle.

6) Testing always under time constraint:


"Hey tester, we want to ship this product by this weekend, are you ready for completion?"
When this order comes from the boss, the tester simply focuses on task completion and not
on test coverage and quality of work. There is a huge list of tasks that you need to complete
within the specified time. This includes writing, executing, automating and reviewing the test
cases.

7) Which tests to execute first?


If you are facing the challenge stated in point 6, then how will you decide which
test cases should be executed, and with what priority? Which tests are more important than
others? This requires good experience of working under pressure.

8) Understanding the requirements:


Sometimes testers are responsible for communicating with customers to understand the
requirements. What if the tester fails to understand the requirements? Will he be able to test
the application properly? Definitely not! Testers require good listening and understanding
capabilities.

9) Automation testing:
Many sub-challenges: Should the testing work be automated? To what level should automation
be done? Do you have sufficient and skilled resources for automation? Does time permit
automating the test cases? The decision between automation and manual testing needs to
address the pros and cons of each approach.

10) Decision to stop the testing:


When to stop testing? A very difficult decision. It requires core judgment of the testing
processes and the importance of each process. It also requires on-the-fly decision ability.

11) One test team under multiple projects:


It is challenging to keep track of each task, and there are communication challenges. This
often results in the failure of one or both of the projects.

12) Reuse of Test scripts:


Application development methods are changing rapidly, making it difficult to manage the
test tools and test scripts. Test script migration or reuse is an essential but difficult task.

13) Testers focusing on finding easy bugs:


If the organization is rewarding testers based on the number of bugs (a very bad approach to
judging a tester's performance), then some testers only concentrate on finding easy bugs that
don't require deep understanding and testing. A hard or subtle bug remains unnoticed in such
a testing approach.

14) To cope with attrition:


Increasing salaries and benefits are making many employees leave the company at very short
career intervals. Managements are facing hard problems coping with the attrition rate.
Challenges: new testers require project training from the beginning, complex projects are
difficult to understand, and the shipping date may be delayed!

Myths
Given below are the most common myths about software testing.

Myth 1 : Testing is Too Expensive


Reality: There is a saying, "Pay less for testing during software development, or pay more for
maintenance or correction later." Early testing saves both time and cost in many aspects; however,
reducing the cost by skipping testing may result in the improper design of a software application,
rendering the product useless.
Myth 2 : Testing is Time-Consuming
Reality: During the SDLC phases, testing is never a time-consuming process. However, diagnosing and
fixing the errors identified during proper testing is a time-consuming but productive activity.

Myth 3 : Only Fully Developed Products are Tested


Reality: No doubt, testing depends on the source code, but reviewing requirements and developing
test cases is independent of the developed code. However, an iterative or incremental approach as a
development life cycle model may reduce the dependency of testing on the fully developed software.

Myth 4 : Complete Testing is Possible


Reality: It becomes an issue when a client or tester thinks that complete testing is possible. It is
possible that all paths have been tested by the team but occurrence of complete testing is never
possible. There might be some scenarios that are never executed by the test team or the client
during the software development life cycle and may be executed once the project has been
deployed.

Myth 5 : A Tested Software is Bug-Free


Reality: This is a very common myth that clients, project managers, and the management team
believe in. No one can claim with absolute certainty that a software application is 100% bug-free,
even if a tester with superb testing skills has tested the application.

Myth 6 : Missed Defects are due to Testers


Reality: It is not a correct approach to blame testers for bugs that remain in the application even
after testing has been performed. This myth relates to the constraints of time, cost, and changing
requirements. However, the test strategy may also result in bugs being missed by the testing team.

Myth 7 : Testers are Responsible for Quality of Product.


Reality: It is a very common misinterpretation that only testers or the testing team should be
responsible for product quality. Testers' responsibilities include the identification of bugs and
reporting them to the stakeholders; then it is the stakeholders' decision whether to fix the bugs or
release the software. Releasing the software at that time puts more pressure on the testers, as they
will be blamed for any errors.

Myth 8 : Test Automation should be used wherever Possible to Reduce Time

Reality: Yes, it is true that Test Automation reduces the testing time, but it is not possible to start
test automation at any time during software development. Test automation should be started when
the software has been manually tested and is stable to some extent. Moreover, test automation can
never be used if requirements keep changing.

Myth 9 : Anyone Can Test Software Application.


Reality: People outside the IT industry think and even believe that anyone can test software and that
testing is not a creative job. However, testers know very well that this is a myth. Thinking of
alternative scenarios and trying to crash the software with the intent of exploring potential bugs is
not possible for the person who developed it.

Myth 10 : A Tester’s Only Task is to Find Bugs.


Reality: Finding bugs in a software is the task of the testers, but at the same time, they are domain
experts of the particular software. Developers are only responsible for the specific component or
area that is assigned to them but testers understand the overall workings of the software, what the
dependencies are, and the impacts of one module on another module.

Capability Maturity Model (CMM)


Capability Maturity Model is a benchmark for measuring the maturity of an organization's software
process. It is a methodology used to develop and refine an organization's software development
process. CMM can be used to assess an organization against a scale of five process maturity levels
based on certain Key Process Areas (KPA). It describes the maturity of the company based upon the
project the company is dealing with and the clients. Each level ranks the organization according to its
standardization of processes in the subject area being assessed.

A maturity model provides:

 A place to start
 The benefit of a community's prior experiences
 A common language and a shared vision
 A framework for prioritizing actions
 A way to define what improvement means for your organization

In CMMI models with a staged representation, there are five maturity levels designated by the
numbers 1 through 5 as shown below:

1. Initial
2. Managed
3. Defined
4. Quantitatively Managed
5. Optimizing
Maturity levels consist of a predefined set of process areas. The maturity levels are measured by the
achievement of the specific and generic goals that apply to each predefined set of process areas. The
following sections describe the characteristics of each maturity level in detail.

Maturity Level 1 – Initial: Company has no standard process for software development. Nor does it
have a project-tracking system that enables developers to predict costs or finish dates with any
accuracy.

In detail we can describe it as given below:

At maturity level 1, processes are usually ad hoc and chaotic.

The organization usually does not provide a stable environment. Success in these organizations
depends on the competence and heroics of the people in the organization and not on the use of
proven processes.

Maturity level 1 organizations often produce products and services that work, but the company has no
standard process for software development, nor does it have a project-tracking system that enables
developers to predict costs or finish dates with any accuracy.

Maturity level 1 organizations are characterized by a tendency to over commit, abandon processes
in the time of crisis, and not be able to repeat their past successes.

Maturity Level 2 – Managed: Company has installed basic software management processes and
controls. But there is no consistency or coordination among different groups.

In detail we can describe it as given below:

At maturity level 2, an organization has achieved all the specific and generic goals of the maturity
level 2 process areas. In other words, the projects of the organization have ensured that
requirements are managed and that processes are planned, performed, measured, and controlled.
The process discipline reflected by maturity level 2 helps to ensure that existing practices are
retained during times of stress. When these practices are in place, projects are performed and
managed according to their documented plans.

At maturity level 2, requirements, processes, work products, and services are managed. The status
of the work products and the delivery of services are visible to management at defined points.

Commitments are established among relevant stakeholders and are revised as needed. Work
products are reviewed with stakeholders and are controlled.

The work products and services satisfy their specified requirements, standards, and objectives.

Maturity Level 3 – Defined: Company has pulled together a standard set of processes and controls
for the entire organization so that developers can move between projects more easily and
customers can begin to get consistency from different groups.

In detail we can describe it as given below:

At maturity level 3, an organization has achieved all the specific and generic goals.

At maturity level 3, processes are well characterized and understood, and are described in standards,
procedures, tools, and methods.

A critical distinction between maturity level 2 and maturity level 3 is the scope of standards, process
descriptions, and procedures. At maturity level 2, the standards, process descriptions, and
procedures may be quite different in each specific instance of the process (for example, on a
particular project). At maturity level 3, the standards, process descriptions, and procedures for a
project are tailored from the organization's set of standard processes to suit a particular project or
organizational unit.

The organization's set of standard processes includes the processes addressed at maturity level 2
and maturity level 3. As a result, the processes that are performed across the organization are
consistent except for the differences allowed by the tailoring guidelines.

Another critical distinction is that at maturity level 3, processes are typically described in more detail
and more rigorously than at maturity level 2.

At maturity level 3, processes are managed more proactively using an understanding of the
interrelationships of the process activities and detailed measures of the process, its work products,
and its services.

Maturity Level 4 – Quantitatively Managed: In addition to implementing standard processes, the
company has installed systems to measure the quality of those processes across all projects.

In detail we can describe it as given below:

At maturity level 4, an organization has achieved all the specific goals of the process areas assigned
to maturity levels 2, 3, and 4 and the generic goals assigned to maturity levels 2 and 3.

At maturity level 4, sub-processes are selected that significantly contribute to overall process
performance. These selected sub-processes are controlled using statistical and other quantitative
techniques.

Quantitative objectives for quality and process performance are established and used as criteria in
managing processes. Quantitative objectives are based on the needs of the customer, end users,
organization, and process implementers. Quality and process performance are understood in
statistical terms and are managed throughout the life of the processes.

For these processes, detailed measures of process performance are collected and statistically
analysed. Special causes of process variation are identified and, where appropriate, the sources of
special causes are corrected to prevent future occurrences.
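As an illustration of the statistical analysis described above, the sketch below computes 3-sigma control limits from a stable baseline and flags later observations that fall outside them, which are candidates for special-cause investigation. The defect counts, function names, and thresholds here are hypothetical examples, not part of the CMMI model itself.

```python
import statistics

def control_limits(samples):
    """Compute Shewhart-style 3-sigma control limits from baseline data."""
    mean = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    return mean - 3 * sigma, mean + 3 * sigma

def special_causes(samples, lcl, ucl):
    """Return indices of points outside the control limits --
    candidates for special-cause investigation."""
    return [i for i, x in enumerate(samples) if x < lcl or x > ucl]

# Defect counts per build from a stable baseline period (hypothetical data)
baseline = [4, 5, 3, 6, 4, 5, 4, 5, 6, 4]
lcl, ucl = control_limits(baseline)

# New observations: the spike at index 3 suggests a special cause to investigate
recent = [5, 4, 6, 14, 5]
print(special_causes(recent, lcl, ucl))  # -> [3]
```

Points inside the limits reflect common-cause (inherent) variation and are left alone at level 4; only out-of-limit points trigger corrective action.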

Quality and process performance measures are incorporated into the organization's measurement
repository to support fact-based decision making in the future.

A critical distinction between maturity level 3 and maturity level 4 is the predictability of process
performance. At maturity level 4, the performance of processes is controlled using statistical and
other quantitative techniques, and is quantitatively predictable. At maturity level 3, processes are
only qualitatively predictable.

Maturity Level 5 – Optimizing: The company has accomplished all of the above and can now begin to
see patterns in performance over time, so it can tweak its processes to improve productivity and
reduce defects in software development across the entire organization.

In more detail:

At maturity level 5, an organization has achieved all the specific goals of the process areas assigned
to maturity levels 2, 3, 4, and 5 and the generic goals assigned to maturity levels 2 and 3.

Processes are continually improved based on a quantitative understanding of the common causes of
variation inherent in processes.

Maturity level 5 focuses on continually improving process performance through both incremental
and innovative technological improvements.

Quantitative process-improvement objectives for the organization are established, continually
revised to reflect changing business objectives, and used as criteria in managing process
improvement.

The effects of deployed process improvements are measured and evaluated against the quantitative
process-improvement objectives. Both the defined processes and the organization's set of standard
processes are targets of measurable improvement activities.

Optimizing processes that are agile and innovative depends on the participation of an empowered
workforce aligned with the business values and objectives of the organization.

The organization's ability to rapidly respond to changes and opportunities is enhanced by finding
ways to accelerate and share learning. Improvement of the processes is inherently part of
everybody's role, resulting in a cycle of continual improvement.

A critical distinction between maturity level 4 and maturity level 5 is the type of process variation
addressed. At maturity level 4, processes are concerned with addressing special causes of process
variation and providing statistical predictability of the results. Though processes may produce
predictable results, the results may be insufficient to achieve the established objectives. At maturity
level 5, processes are concerned with addressing common causes of process variation and changing
the process (that is, shifting the mean of the process performance) to improve process performance
(while maintaining statistical predictability) to achieve the established quantitative process-
improvement objectives.
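The level 4 versus level 5 distinction can be illustrated numerically: a common-cause improvement shifts the mean of a process measure while leaving its spread (and hence its statistical predictability) roughly stable. A minimal sketch using hypothetical defect-density data:

```python
import statistics

def performance_summary(samples):
    """Mean and spread of a process measure (e.g. defects per KLOC)."""
    return statistics.mean(samples), statistics.pstdev(samples)

# Hypothetical defect densities before and after a level 5 process change.
before = [8.0, 7.5, 8.5, 8.0, 7.8, 8.2]
after  = [5.0, 4.6, 5.4, 5.0, 4.8, 5.2]

mean_b, sd_b = performance_summary(before)
mean_a, sd_a = performance_summary(after)

# The mean shifts down (improved performance) ...
assert mean_a < mean_b
# ... while the spread stays comparable (predictability is maintained).
print(round(mean_b - mean_a, 2))  # -> 3.0
```

At level 4 the organization would merely keep each process within its existing limits; at level 5 it deliberately moves the mean itself toward the quantitative improvement objective.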