
INTERVIEW QUESTIONS

1. What is software testing ?

Software testing is the process of evaluating a system, manually or automatically, to verify that it satisfies specified requirements or to identify differences between expected and actual results.

A software development project is one in which a software product that fulfills the needs of a customer is to be developed and delivered within a specified cost and time period.

Testing cannot show the absence of defects/errors; it demonstrates conformance to specifications and is an indication of software reliability and quality.

Testing analyzes a program with the intent of finding problems and errors, measuring system functionality and quality. It also evaluates the attributes and capabilities of a program and assesses whether they achieve the required results.

Need for software testing :-

Software testing is important because poorly tested software may cause mission failure and impact operational performance and reliability. Effective software testing helps to deliver quality software products that satisfy users' requirements, needs and expectations. If testing is done poorly, defects are found during operations, which results in high maintenance cost and user dissatisfaction.

The main objective of testing is to help clearly describe system behavior and to find defects in requirements, design, documentation, and code as early as possible. The test process should reduce the number of defects in the software product that will be delivered to the customer.

Goals of a software tester :-

The goal of a software tester is to find bugs.

The goal of a software tester is to find bugs, and find them as early as possible.

The goal of a software tester is to find bugs, find them as early as possible, and make sure they get fixed.

Principles of software testing :-

Testing shows the presence of defects

Exhaustive testing is impossible

Early testing (early detection of defects)

Defect clustering

Pesticide paradox

Testing is context dependent

Absence-of-errors fallacy

2. What is the difference between white box testing and black box testing?

4. Black box testing and white box testing techniques?

5. Difference between boundary value analysis and equivalence partitioning?

White box testing :

It is also known as structural testing. It is done mainly by developers; a tester with programming knowledge can also perform it.

Other names for this testing are: open box testing, glass box testing, clear box testing.

White box testing techniques :

Statement coverage : every statement is executed at least once.

Decision/condition/branch coverage : each branch (e.g. loops, if-else) is executed at least once for each outcome.

Path coverage : every path from start to end is covered, e.g. every combination of true and false branches.
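As a sketch of the difference between branch and path coverage, the following hypothetical function (the names classify, amount and is_member are illustrative, not from any real system) reaches full branch coverage with two tests, but needs four to cover every path:

```python
# A function with two independent decisions.
# Branch coverage: each branch outcome (true/false) taken at least once.
# Path coverage: every combination of branch outcomes.
def classify(amount, is_member):
    discount = 0
    if amount > 100:          # branch 1
        discount += 10
    if is_member:             # branch 2
        discount += 5
    return discount

# Two tests already achieve 100% branch coverage:
assert classify(150, True) == 15   # both branches true
assert classify(50, False) == 0    # both branches false

# Full path coverage needs the remaining two combinations as well:
assert classify(150, False) == 10  # true/false path
assert classify(50, True) == 5     # false/true path
```

With n independent decisions there are 2n branch outcomes but up to 2^n paths, which is why exhaustive path coverage quickly becomes impractical.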
Black box testing :

It is also known as functional testing. It is mainly done by testers.

Other names for black box testing are: behavioural testing, closed box testing, opaque box testing.

Black box testing techniques can be :

1. Equivalence partitioning : Equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class. For example, a program which edits credit limits within a given range ($10,000-$15,000) would have three equivalence classes:

Less than $10,000 (invalid)

Between $10,000 and $15,000 (valid)

Greater than $15,000 (invalid)

2. Boundary value analysis : It focuses only on the boundary values; that is the main difference between equivalence partitioning and boundary value analysis.

Boundary analysis tests can be:

Lower boundary plus or minus one ($9,999 and $10,001)

On the boundary ($10,000 and $15,000)

Upper boundary plus or minus one ($14,999 and $15,001)

3. Error guessing : It is performed by experienced testers who can guess where errors are likely to occur.
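The first two techniques can be sketched in code. The validator below is a hypothetical stand-in for the credit-limit program; the test values come straight from the equivalence classes and boundaries listed above:

```python
# Equivalence partitioning and boundary value analysis for the
# credit-limit example ($10,000-$15,000). is_valid_credit_limit is
# an invented stand-in for the system under test.
def is_valid_credit_limit(amount):
    return 10_000 <= amount <= 15_000

# Equivalence partitioning: one representative value per class.
assert is_valid_credit_limit(9_000) is False    # below range (invalid)
assert is_valid_credit_limit(12_500) is True    # within range (valid)
assert is_valid_credit_limit(16_000) is False   # above range (invalid)

# Boundary value analysis: values on and adjacent to each boundary.
for amount, expected in [
    (9_999, False), (10_000, True), (10_001, True),   # lower boundary
    (14_999, True), (15_000, True), (15_001, False),  # upper boundary
]:
    assert is_valid_credit_limit(amount) is expected
```

Three partition tests plus six boundary tests exercise the range far more cheaply than checking every dollar value in it.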

White box testing vs. black box testing :

1. White box: the internal structure is known to the tester who is going to test the software. Black box: testing is done without knowing the internal structure or code of the program.

2. White box: applied at lower levels of testing, such as unit testing and integration testing. Black box: applied at higher levels of testing, such as system testing and UAT.

3. White box: carried out by developers. Black box: carried out by testers.

4. White box: programming knowledge is required. Black box: programming knowledge is not required.

5. White box: means structural or interior testing. Black box: means functional or exterior testing.

3. Types of defects?

A defect is a variance from a desired product attribute. Two categories of defects are:

Variance from product specifications

Variance from customer/user expectations

Variance from product specifications – The product built varies from the product specified. For example, the specification may say that a is to be added to b to produce c. If the algorithm in the built product varies from that specification, it is considered defective.

Variance from customer/user expectations – The variance is something the user wanted that is not in the built product, but also was not specified to be included in the built product. The missing piece may be a specification or requirement, or the method by which the requirement was implemented may be unsatisfactory.

Defects generally fall into one of the following three categories:

Wrong – The specifications have been implemented incorrectly. This defect is a variance from the customer/user specification.

Missing – A specified or wanted requirement is not in the built product. This can be a variance from the specification, an indication that the specification was not implemented, or a requirement of the customer identified during or after the product was built.

Extra – A requirement incorporated into the product that was not specified. This is always a variance from the specification; the attribute may even be something the user of the product desires. However, it is still considered a defect.

6. What is alpha testing and beta testing?

UAT – User Acceptance Testing. It is testing where the user checks whether the software is built as per the requirements. There is no role for the tester or developer here; it is completely dependent on users.

Two ways of UAT:-

Alpha testing : Testing done on the company's premises. It is performed to identify all possible bugs before releasing the product to everyday users or the public. It is an internal UAT.

Beta testing : Testing done at the client's location. It is performed by "real users" in a "real environment". It is an external UAT.

Alpha testing vs. beta testing :

1. Alpha: performed by internal employees of the organization. Beta: performed by clients or end users.

2. Alpha: performed at the developer's site. Beta: performed at the client's location.

3. Alpha: reliability and security testing are not done. Beta: both reliability and security testing are performed.

4. Alpha: includes both white box and black box testing. Beta: only black box testing.

5. Alpha: ensures the quality of the product before moving to beta testing. Beta: also concentrates on the quality of the product, but gathers user input and ensures that the product is ready for real-time users.

7. What is static testing & dynamic testing?

In static testing, code is not executed. Rather, the code, requirement documents, and design documents are checked manually to find errors; hence the name "static".

The main objective of this testing is to improve the quality of software products by finding errors in the early stages of the development cycle. This testing is also called the non-execution technique or verification testing.

Static testing involves manual or automated reviews of the documents. This review is done during the initial phase of testing to catch defects early in the STLC. It examines work documents and provides review comments.

Work documents can be the following:

Requirement specifications
Design documents
Source code
Test plans
Test cases

In dynamic testing, code is executed. It checks the functional behavior of the software system, memory/CPU usage and the overall performance of the system; hence the name "dynamic".

The main objective of this testing is to confirm that the software product works in conformance with the business requirements. This testing is also called the execution technique or validation testing.

Dynamic testing executes the software and validates the output against the expected outcome. Dynamic testing is performed at all levels of testing and can be either black box or white box testing.

Static Testing Techniques:

Informal Reviews: This is a type of review which doesn't follow any process to find errors in the document. Under this technique, you just review the document and give informal comments on it.
Technical Reviews: A team consisting of your peers reviews the technical specification of the software product and checks whether it is suitable for the project. They try to find any discrepancies in the specifications and standards followed. This review concentrates mainly on technical documents related to the software, such as the test strategy, test plan and requirement specification documents.
Walkthrough: The author of the work product explains the product to the team. Participants can ask questions, the meeting is led by the author, and a scribe makes note of the review comments.
Inspection: The main purpose is to find defects, and the meeting is led by a trained moderator. This is a formal type of review which follows a strict process to find defects. Reviewers have a checklist to review the work products. They record the defects and inform the participants so those errors can be rectified.
Static Code Review: This is a systematic review of the software source code without executing the code. It checks the syntax of the code, coding standards, code optimization, etc. This is also termed white box testing. This review can be done at any point during development.
Dynamic Testing Techniques:

Unit Testing: Under unit testing, individual units or modules are tested by the developers. It involves testing of source code by developers.
Integration Testing: Individual modules are grouped together and tested by the developers. The purpose is to determine that the modules work as expected once they are integrated.
System Testing: System testing is performed on the whole system, checking whether the system or application meets the requirement specification document.
Also, non-functional testing such as performance and security testing falls under the category of dynamic testing.
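As an illustration of dynamic testing at the unit level, here is a minimal sketch using Python's standard unittest module; the discount function is an invented unit under test, not from any real product:

```python
import unittest

def discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountTest(unittest.TestCase):
    def test_typical_value(self):
        # Dynamic testing: the code is actually executed and its
        # output is validated against the expected outcome.
        self.assertEqual(discount(200, 25), 150)

    def test_boundaries(self):
        self.assertEqual(discount(200, 0), 200)
        self.assertEqual(discount(200, 100), 0)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            discount(200, 150)

# Run the test case programmatically and collect the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note how the boundary tests reuse the boundary value analysis idea from the black box techniques above; the two classifications describe different aspects of the same tests.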

Static testing vs. dynamic testing :

1. Static: testing is done without executing the program. Dynamic: testing is done by executing the code.

2. Static: a verification process. Dynamic: a validation process.

3. Static testing is about prevention of defects. Dynamic testing is about finding and fixing defects.

4. Static testing gives an assessment of code and documentation. Dynamic testing exposes bugs and bottlenecks in the software system.

5. Static testing involves a checklist and a process to be followed. Dynamic testing involves test cases for execution.

6. Static: can be performed before compilation. Dynamic: performed after compilation.

7. Static testing covers structural and statement coverage testing. Dynamic testing covers the executable form of the code.

8. Static: the cost of finding and fixing defects is low. Dynamic: the cost is high.

9. Static: return on investment is high, as the process is involved at an early stage. Dynamic: return on investment is low, as the process comes after the development phase.

10. Static: more review comments are recommended for good quality. Dynamic: finding more defects is recommended for good quality.

11. Static: requires many meetings. Dynamic: fewer meetings.

8. What is the difference between re-testing and regression testing?

Regression testing is carried out to ensure that existing functionality is working fine and there are no side effects of any new changes or enhancements made in the application. In other words, regression testing checks whether new defects were introduced into previously existing functionality.

Retesting is carried out to ensure that a particular defect has been fixed and the functionality is working as expected.

Regression testing vs. retesting :

1. Regression testing is done to find issues which may be introduced by any change or modification in the application. Retesting is done to confirm whether the test cases that failed in the previous execution work after the issues have been fixed.

2. The purpose of regression testing is that any new change in the application should NOT introduce a new bug into existing functionality. The purpose of retesting is to ensure that the particular bug or issue is resolved and the functionality is working as expected.

3. Verification of bug fixes is not included in regression testing. It is included in retesting.

4. Regression testing can be done in parallel with retesting. Retesting is of high priority, so it is done before regression testing.

5. Test cases for regression testing can be automated. Retesting cannot be automated.

6. In regression testing the testing style is generic. Retesting is done in a planned way.

7. During regression testing even the passed test cases are executed. In retesting only the failed test cases are executed.

8. Regression testing is carried out to check for unexpected side effects. Retesting is carried out to ensure that the original issue is fixed and working as expected.

9. Regression testing is done whenever a new feature is implemented or any modification or enhancement has been made to the code. Retesting is executed in the same environment with the same data, but on a new build.

10. Test cases for regression testing can be obtained from the specification documents and bug reports. Test cases for retesting can be obtained only once testing starts.

9. How much testing is enough?

No software can be 100% defect free; we can only reduce the number of defects. Testing can be considered enough when:

The team agrees on the testing performed so far and its results

The release date is more important than performing more tests

The team is well informed on the testing status

The testing budget is running out


10. When should testing start and stop?

An early start to testing reduces the cost and time to rework and produce error-free software that
is delivered to the client. However in Software Development Life Cycle (SDLC), testing can be
started from the Requirements Gathering phase and continued till the deployment of the
software. It also depends on the development model that is being used. For example, in the
Waterfall model, formal testing is conducted in the testing phase; but in the incremental model,
testing is performed at the end of every increment/iteration and the whole application is tested at
the end.
Testing is done in different forms at every phase of SDLC:
During the requirement gathering phase, the analysis and verification of requirements are also
considered as testing.
Reviewing the design in the design phase with the intent to improve the design is also considered
as testing.
Testing performed by a developer on completion of the code is also categorized as testing.
It is difficult to determine when to stop testing, as testing is a never-ending process and no one
can claim that a software is 100% tested. The following aspects are to be considered for stopping
the testing process:
Testing Deadlines
Completion of test case execution
Completion of functional and code coverage to a certain point
Bug rate falls below a certain level and no high-priority bugs are identified
Management decision

11. What is verification and validation?

Verification is the process of evaluating products of a development phase to find out whether
they meet the specified requirements.

Validation is the process of evaluating software at the end of the development process to determine whether the software meets the customer's expectations and requirements.
Verification vs. validation :

1. Verification: are we building the system right? Validation: are we building the right system?

2. The objective of verification is to make sure that the product is being built according to the requirements and design specifications. The objective of validation is to make sure that the product actually meets the user's requirements, and to check whether the specifications were correct in the first place.

3. Activities involved in verification: reviews, meetings and inspections. Activities involved in validation: testing, such as black box testing, white box testing, gray box testing, etc.

4. Verification is carried out by the QA team to check whether the software implementation follows the specification document or not. Validation is carried out by the testing team (QC – Quality Control).

5. Execution of code does not come under verification. Execution of code comes under validation.

6. The verification process explains whether the outputs are according to the inputs or not. The validation process describes whether the software is accepted by the user or not.

7. Verification is carried out before validation. Validation is carried out just after verification.

8. Items evaluated during verification: plans, requirement specifications, design specifications, code, test cases, etc. Item evaluated during validation: the actual product or software under test.

9. The cost of errors caught in verification is less than that of errors found in validation. The cost of errors caught in validation is more than that of errors found in verification.

10. Verification is basically manual checking of documents and files, such as requirement specifications. Validation is basically checking of the developed program based on the requirement specifications.
NOTE :

1. Both verification and validation are essential and complementary to each other.

2. Each of them provides a different error filter.

3. Both are used to find defects in different ways: verification is used to identify errors in the requirement specifications, and validation is used to find defects in the implemented software application.

12. What is the difference between QA, QC and testing?

Quality is defined as meeting the customer's requirements the first time and every time.

Quality is much more than the absence of defects; it allows us to meet customers' expectations.

Quality can only be seen through the eyes of the customers. An understanding of the customer's expectations (effectiveness) is the first step; then exceeding those expectations (efficiency) is required.

Quality can only be achieved by the continuous improvement of all systems and processes in the organization: not only the production of products and services, but also the design, development, service, purchasing, administration and, indeed, all aspects of the transaction with the customer.

Quality Assurance:

Quality assurance is a planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs. Quality assurance is a staff function, responsible for implementing the quality policy defined through the development and continuous improvement of the software development process.

Quality Control:

Quality control is the process by which product quality is compared with applicable standards and action is taken when non-conformance is detected. Quality control is a line function, and the work is done within a process to ensure that the work product conforms to standards and/or requirements.

Quality assurance vs. quality control :

1. Quality assurance helps us to build processes. Quality control helps us to implement the built processes.

2. QA is the duty of the complete team. QC is the duty of the testing team only.

3. QA comes under the category of verification. QC comes under the category of validation.

4. Quality assurance is considered a process-oriented exercise. Quality control is considered a product-oriented exercise.

5. QA prevents the occurrence of issues, bugs or defects in the application. QC detects, corrects and reports the bugs or defects in the application.

6. QA does not involve executing the program or code (static testing). QC always involves executing the program or code (dynamic testing).

7. QA is done before quality control. QC is done only after the quality assurance activity is completed.

8. QA can catch errors and mistakes that quality control cannot catch, which is why it is considered a low-level activity. QC can catch errors that quality assurance cannot catch, which is why it is considered a high-level activity.

9. QA is human-based checking of documents or files. QC is computer-based execution of the program or code.

10. Quality assurance means planning done for a process. Quality control means action taken on the process by executing it.

11. QA mainly focuses on preventing defects or bugs in the system. QC mainly focuses on identifying defects or bugs in the system.

12. QA is not considered a time-consuming activity. QC is always considered a time-consuming activity.

13. Quality assurance makes sure that you are doing the right things in the right way; that is why it always comes under the category of verification activity. Quality control makes sure that whatever we have done is as per the requirement, i.e. as per what we expected; that is why it comes under the category of validation activity.

14. QA is proactive: it identifies weaknesses in the processes. QC is reactive: it identifies the defects and also corrects them.

13. What does entry and exit criteria mean in a project?

14. What are entry criteria and exit criteria in software testing?

In simple words, entry and exit criteria are the start and stop points of any phase. The outputs that satisfy the exit criteria of one phase serve as inputs to the entry criteria of the next.

Entry criteria – They ensure that the proper environment is in place to start the test process of a project, e.g. all hardware/software platforms are successfully installed and functional, and the test plan and test cases are reviewed and signed off.
Exit criteria – They ensure that the project is complete before exiting the test stage, e.g. planned deliverables are ready, high-severity defects are fixed, and documentation is complete and updated.
Entry Criteria :

Approved design and requirement documents.

Test plan must be reviewed and signed off by the Test Lead/Manager.

All developed code must be unit tested. Unit and Link testing must be completed and signed off
by development team.

All human resources must be assigned and available with the necessary Test bed.

Application must be installed and configured similar to the customer environment independently
(segregated from development environment) to start test execution.

Test cases for Functional testing are created and reviewed.

Exit Criteria :

All high priority defects must be fixed and re-tested.

If any medium or low-priority defects are outstanding, the Project Manager must sign off the implementation risk as acceptable.

Test cases execution with test results.

Bug Report.

Test Report and Release approved by the Test Manager.

Software Release Checklist

For example :

For integration testing, the entry and exit criteria could be as follows:-

All defects from unit testing are closed.

Unit test cases are uploaded in the defect tracking tool.

Test scenarios are provided by the developers (as per the company's process), etc.

15. What is the difference between a defect and a failure?

Error: A mistake made by the programmer is known as an 'error'. This could happen because of the following reasons:
– confusion in understanding the functionality of the software
– miscalculation of values
– misinterpretation of a value, etc.
Defect: A bug introduced by the programmer inside the code is known as a defect. This can happen because of programmatic mistakes.
Failure: If, under certain circumstances, these defects get executed during testing, the result is known as a software failure.

26. What is a showstopper/blocker/critical defect?

A showstopper defect can be described as a bug which stops or restricts testing from moving ahead with a specific functionality or module. It simply halts test execution.

Example:
A site used for photo editing is being tested. On the home page it asks the user to browse for a photo from the internet or the local computer. When we select a photo and press the upload button, it should take us to the next page for editing. But instead it shows an error message without going to the next page, which stops the testing. This is what we call a showstopper defect.

16. What are different testing levels?

Testing levels basically serve to identify missing areas and prevent overlap and repetition between the development life cycle phases. Software development life cycle models define phases such as requirement gathering and analysis, design, coding or implementation, testing and deployment. Each phase goes through testing; hence there are various levels of testing. The various levels of testing are:

1. Unit testing: It is basically done by the developers to make sure that their code works fine and meets the user specifications. They test the pieces of code which they have written, such as classes, functions, interfaces and procedures.

2. Integration testing: Integration testing is done when two modules are integrated, in order to test the behavior and functionality of both modules after integration. Below are a few types of integration testing:

 Big bang integration testing

 Top down

 Bottom up

Big bang : all components are combined at once or simultaneously.

Top down : modules are combined from the higher level to the lower level. If a module is not available during top-down integration, a dummy module called a stub is created in its place. E.g.: in an online shopping site, the classifications under kids' wear are not yet available, but we can guess them, so a dummy module can be created.

Bottom up : modules are combined from the lower level to the higher level. Here the dummy modules are called drivers.
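The stub idea above can be sketched as follows; Catalog and kids_wear_stub are invented names used purely for illustration:

```python
# Top-down integration sketch: a real higher-level module is tested
# while a missing lower-level module is replaced by a stub.
def kids_wear_stub():
    # Stub: dummy stand-in for the unfinished kids-wear module,
    # returning canned data so integration testing can proceed.
    return ["t-shirts", "shorts"]

class Catalog:
    """Higher-level module that depends on a lower-level data source."""
    def __init__(self, kids_wear_source):
        self.kids_wear_source = kids_wear_source

    def sections(self):
        return {"kids": self.kids_wear_source()}

# The higher-level Catalog is integrated against the stub; once the
# real kids-wear module exists, it simply replaces kids_wear_stub.
catalog = Catalog(kids_wear_stub)
assert catalog.sections() == {"kids": ["t-shirts", "shorts"]}
```

In bottom-up integration the roles reverse: the real lower-level module is tested, and a driver plays the part of the missing higher-level caller.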

3. System testing: In system testing the testers test the complete, integrated application, checking its compliance with the specified requirements.

4. Acceptance testing: Acceptance testing is done to ensure that the requirements of the specification are met.

Two ways of UAT :

 Alpha testing: Alpha testing is done at the developer's site. It is done at the end of
the development process.

 Beta testing: Beta testing is done at the customer's site. It is done just before the
launch of the product.

17. Bug Life Cycle, STLC, SDLC?


The defect life cycle is the cycle a defect goes through during its lifetime. It starts when a defect is found and ends when the defect is closed, after ensuring it is not reproduced. The defect life cycle relates to the bugs found during testing.

The bug passes through different states during its life cycle. The bug or defect life cycle includes the following steps or statuses:

1. New: When a defect is logged and posted for the first time, its state is given as new.

2. Assigned: After the tester has posted the bug, the tester's lead approves that the bug is genuine and assigns it to the corresponding developer and developer team. Its state is given as assigned.

3. Open: At this state the developer has started analyzing and working on the defect fix.

4. Fixed: When developer makes necessary code changes and verifies the changes then
he/she can make bug status as ‘Fixed’ and the bug is passed to testing team.
5. Pending retest: After fixing the defect, the developer gives the particular code to the tester for retesting. Here the testing is pending on the tester's end; hence its status is pending retest.

6. Retest: At this stage the tester retests the changed code which the developer has given, to check whether the defect is fixed or not.

7. Verified: The tester tests the bug again after it got fixed by the developer. If the bug is
not present in the software, he approves that the bug is fixed and changes the status to
“verified”.

8. Reopen: If the bug still exists even after the bug is fixed by the developer, the tester
changes the status to “reopened”. The bug goes through the life cycle once again.

9. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no
longer exists in the software, he changes the status of the bug to “closed”. This state
means that the bug is fixed, tested and approved.

10. Duplicate: If the bug is repeated twice or the two bugs mention the same concept of the
bug, then one bug status is changed to “duplicate“.

11. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the
state of the bug is changed to “rejected”.

12. Deferred: The bug changed to the deferred state is expected to be fixed in the next releases. There are many reasons for changing a bug to this state: the priority of the bug may be low, there may be a lack of time for the release, or the bug may not have a major effect on the software.

13. Not a bug: The state is given as "not a bug" if there is no change in the functionality of the application. For example: if the customer asks for some change in the look and feel of the application, such as a change of colour of some text, then it is not a bug but just a change in the application's appearance.
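The statuses above can be sketched as a simple state machine. The transition table below is a simplified illustration of one common flow, not a standard mandated by any defect-tracking tool:

```python
# A simplified defect life cycle as a state-transition table.
# Each key maps a status to the set of statuses it may move to next.
TRANSITIONS = {
    "New":            {"Assigned", "Rejected", "Duplicate", "Deferred", "Not a bug"},
    "Assigned":       {"Open"},
    "Open":           {"Fixed"},
    "Fixed":          {"Pending retest"},
    "Pending retest": {"Retest"},
    "Retest":         {"Verified", "Reopen"},
    "Verified":       {"Closed"},
    "Reopen":         {"Assigned"},   # the cycle starts again
    # "Closed", "Rejected", etc. are terminal in this sketch.
}

def is_valid_transition(current, target):
    """Check whether a status change is allowed by the table."""
    return target in TRANSITIONS.get(current, set())

assert is_valid_transition("New", "Assigned")
assert is_valid_transition("Retest", "Reopen")
assert not is_valid_transition("Closed", "Open")
```

Modelling the life cycle this way makes the "Reopen" loop explicit: a reopened bug re-enters the cycle at "Assigned" and goes through fix and retest again.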

STLC {Software testing life cycle}

Software Testing Life Cycle is a testing process which is executed in sequence in order to meet the quality goals. It is not a single activity; it consists of many different activities which are executed to achieve a good-quality product. The different phases of the STLC are given below:

Requirement Analysis
This is the very first phase of the Software Testing Life Cycle (STLC). In this phase the testing team goes through the requirement document, with both functional and non-functional details, in order to identify the testable requirements. In case of any confusion, the QA team may set up a meeting with the clients and stakeholders (technical leads, business analysts, system architects, etc.) in order to clarify their doubts.

Activities to be done in Requirement analysis phase are given below:


• Analyzing the System Requirement specifications from the testing point of view
• Preparation of RTM that is Requirement Traceability Matrix
• Identifying the testing techniques and testing types
• Prioritizing the feature which need focused testing
• Identifying the details about the testing environment where actual testing will be done

Test Plan Preparation


The Test Planning phase starts soon after the completion of the Requirement Analysis phase. In this phase the QA manager or QA lead prepares the test plan and test strategy documents. Based on these documents, they also come up with the testing effort estimations.

Activities to be done in Test Planning phase are given below:


• Estimation of testing effort
• Selection of Testing Approach
• Preparation of Test Plan, Test strategy documents
• Resource planning and assigning roles and responsibility to them

Tool Identification
Identifying which approach to choose, that is, manual or automation.

Test Environment Setup


The test environment defines the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process.

Test Case Preparation


In this phase the testing team writes the test cases. They also write scripts for automation if required.

Test Plan Execution

During this phase the test team carries out the testing based on the test plans and test cases prepared. Bugs are reported back to the development team for correction, and retesting is performed.

Activities:
• Execute tests as per plan
• Document test results, and log defects for failed cases
• Map defects to test cases in RTM
• Retest the defect fixes
• Track the defects to closure
Deliverables:
• Completed RTM with execution status
• Test cases updated with results
• Defect reports

Defect Tracking & Reporting


Tracking which stage the defect is in (its status) and reporting it. A unique ID is generated for each defect after it is reported; that ID is used for future reference. The defect status is identified through the bug or defect life cycle.

Test Report Preparation


The final test report contains a summary of test activities. The testing team meets, discusses and analyzes the testing artifacts to identify strategies that should be implemented in future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and to share best practices for any similar projects in future.

SDLC{Software development life cycle}

There are various software development approaches defined and designed which are used during
the development process of software. These approaches are also referred to as "Software
Development Process Models" (e.g. Waterfall model, incremental model, V-model, iterative
model, RAD model, Agile model, Spiral model, Prototype model etc.). Each process model
follows a particular life cycle in order to ensure success in the process of software development.

Software life cycle models describe phases of the software cycle and the order in which those
phases are executed.

There are following six phases in every Software development life cycle model:

1. Requirement gathering and analysis

2. Design

3. Coding

4. Testing

5. Implementation
6. Maintenance

1) Requirement gathering and analysis: Business requirements are gathered in this phase.
This phase is the main focus of the project managers and stakeholders. Meetings with managers,
stakeholders and users are held in order to determine requirements such as: Who is going to use
the system? How will they use the system? What data should be input into the system? What
data should be output by the system? These are general questions that get answered during the
requirements gathering phase. After requirement gathering, these requirements are analyzed for
their validity, and the possibility of incorporating them in the system to be developed is also
studied.

Finally, a Requirement Specification document is created which serves the purpose of guideline
for the next phase of the model. The testing team follows the Software Testing Life Cycle and
starts the Test Planning phase after the requirements analysis is completed.

2) Design: In this phase the system and software design is prepared from the requirement
specifications which were studied in the first phase. System Design helps in specifying hardware
and system requirements and also helps in defining overall system architecture. The system
design specifications serve as input for the next phase of the model. In this phase the testers
come up with the test strategy, where they mention what to test and how to test it.

3) Coding: On receiving system design documents, the work is divided in modules/units and
actual coding is started. Since, in this phase the code is produced so it is the main focus for the
developer. This is the longest phase of the software development life cycle.

4) Testing: After the code is developed it is tested against the requirements to make sure that
the product is actually solving the needs addressed and gathered during the requirements phase.
During this phase all types of functional testing, like unit testing, integration testing, system
testing and acceptance testing, are done, as well as non-functional testing.

5) Implementation: After successful testing the product is delivered / deployed to the customer
for their use. As soon as the product is given to the customers they will first do the beta testing.
If any changes are required or if any bugs are caught, then they will report it to the engineering
team. Once those changes are made or the bugs are fixed then the final deployment will happen.

6) Maintenance: Once the customers start using the developed system, the actual problems
come up and need to be solved from time to time. This process, in which care is taken of the
developed product, is known as maintenance.

18. What are the fields in a bug report?

Defects are recorded for four major purposes:


 To correct the defect
 To report status of the application
 To gather statistics used to develop defect expectations in future applications
 To improve the software development process

At a minimum, the tool selected should support the recording and communication of all
significant information about a defect. For example, a defect log could include:
 Defect ID number
 Descriptive defect name and type
 Source of defect- test case or other source
 Defect severity
 Defect priority
 Defect status (e.g. open, fixed, closed); more robust tools provide a status history for the
defect
 Date and time tracking for either the most recent status change , or for each change in the
status history
 Detailed description , including the steps necessary to reproduce the defect
 Component or program where defect was found
 Screen prints, logs, etc. that will aid the developer in resolution process
 Stage of origination
 Person assigned to research or correct the defect
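As a sketch, such a defect log entry could be modelled with a Python dataclass. The field names below simply mirror the list above; they are illustrative, not the schema of any real tracker such as Jira or Bugzilla:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DefectReport:
    """One record in a defect log (illustrative fields only)."""
    defect_id: str
    name: str
    severity: str          # e.g. Critical / Major / Moderate / Minor / Cosmetic
    priority: str          # e.g. High / Medium / Low
    status: str = "Open"   # typical flow: Open -> Fixed -> Closed
    component: str = ""
    steps_to_reproduce: list = field(default_factory=list)
    assigned_to: str = ""
    logged_at: datetime = field(default_factory=datetime.now)

bug = DefectReport(
    defect_id="DEF-101",
    name="Login button unresponsive on second click",
    severity="Major",
    priority="High",
    component="Authentication",
    steps_to_reproduce=["Open login page", "Click Login twice"],
)
print(bug.status)  # a new defect starts in the Open state
```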

19. What is Performance Testing? Stress / Load Testing?

Performance testing is performed to ascertain how the components of a system perform under a
given situation. Resource usage, scalability and reliability of the product are also validated in this
testing. It is a subset of performance engineering, which is focused on addressing performance
issues in the design and architecture of a software product.
Performance testing is a non-functional testing technique performed to determine the system
parameters in terms of responsiveness and stability under various workloads.

Load testing :
• Load testing is a type of non-functional testing.
• A load test is a type of software testing which is conducted to understand the behavior of the
application under a specific expected load.
• Load testing is performed to determine a system’s behavior under both normal and peak
conditions.
• It helps to identify the maximum operating capacity of an application as well as any bottlenecks
and determine which element is causing degradation. E.g. if the number of users is increased,
how much CPU and memory will be consumed, and what the network bandwidth and response
time will be.
• Load testing can be done under controlled lab conditions to compare the capabilities of
different systems or to accurately measure the capabilities of a single system.
• Load testing involves simulating real-life user load for the target application. It helps you
determine how your application behaves when multiple users hit it simultaneously.
• Load testing differs from stress testing which evaluates the extent to which a system keeps
working when subjected to extreme work loads or when some of its hardware or software has
been compromised.
• The primary goal of load testing is to define the maximum amount of work a system can handle
without significant performance degradation.

Examples of load testing include:


• Downloading a series of large files from the internet.
• Running multiple applications on a computer or server simultaneously.
• Assigning many jobs to a printer in a queue.
• Subjecting a server to a large amount of traffic.
• Writing and reading data to and from a hard disk continuously.
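A minimal illustration of the idea, using only Python's standard library; `handle_request` is a stand-in for a real server call, which an actual load test would drive with a tool such as JMeter or LoadRunner:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Hypothetical operation simulating 10 ms of server-side work."""
    time.sleep(0.01)
    return f"response for user {user_id}"

def run_load(concurrent_users):
    """Fire one request per simulated user and report total wall time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(handle_request, range(concurrent_users)))
    elapsed = time.perf_counter() - start
    return len(results), elapsed

ok, elapsed = run_load(50)
print(f"{ok} requests served in {elapsed:.2f}s")
```

Rerunning `run_load` with increasing user counts, while watching response time and resource usage, is the essence of finding the maximum operating capacity described above.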

Stress testing :
Stress testing is a non-functional testing technique that is performed as part of performance
testing. During stress testing, the system is monitored after subjecting it to overload,
to ensure that the system can sustain the stress.
The recovery of the system from such phase (after stress) is very critical as it is highly likely to
happen in production environment.
Reasons for conducting Stress Testing:
• It allows the test team to monitor system performance during failures.
• To verify if the system has saved the data before crashing or NOT.
• To verify if the system prints meaningful error messages while crashing, or whether it throws
random exceptions.
• To verify if unexpected failures do not cause security issues.
Stress Testing - Scenarios:
• Monitor the system behaviour when the maximum number of users are logged in at the same time.
• All users performing the critical operations at the same time.
• All users accessing the same file at the same time.
• Hardware issues, such as the database server going down or some of the servers in a server farm crashing.

20. What is the difference between latent and masked defects?

Latent Defect is one which has been in the system for a long time; but is discovered now. i.e. a
defect which has been there for a long time and should have been detected earlier is known as
Latent Defect. One of the reasons why Latent Defect exists is because exact set of conditions
haven’t been met.

 A latent bug is a bug that has existed in a system, uncovered or unidentified, for a period of
time. The bug may exist in one or more versions of the software and might be identified only
after release.
 The problems will not cause the damage currently, but wait to reveal themselves at a later
time.
 The defect is likely to be present in various versions of the software and may be detected
after the release.
 E.g February has 28 days. The system could have not considered the leap year which
results in a latent defect.
 These defects do not cause damage to the system immediately but wait for a particular
event sometime to cause damage and show their presence.

 A masked defect hides another defect, which therefore goes undetected at a given point of
time. That is, an existing defect prevents another defect from being reproduced.
 Masked defect hides other defects in the system.
 E.g. There is a link to add employee in the system. On clicking this link you can also add
a task for the employee. Let’s assume, both the functionalities have bugs. However, the
first bug (Add an employee) goes unnoticed. Because of this the bug in the add task is
masked.
 E.g. Failing to test a subsystem, might also cause not testing other parts of it which might
have defects but remain unidentified as the subsystem was not tested due to its own
defects.

21. What is the difference between use case, test case, test plan?

Use case:

• Designed before the project is started.
• A document that describes the user's interaction with the system.
• Use cases are prepared in the Functional Requirement Specification (FRS); they are prepared
by a BA from the customer requirements.
• Needs the consent and sign-off from the client before proceeding further with the Design
Document, Test Plan and Test Cases.

Test plan:

• A document which gives the background information of the software being tested and provides
directions for testing. It is a systematic approach to testing a system such as a machine or
software. In other words, a test plan is a document that provides and records important
information about a test project.
• Prepared by the Test Team Lead using the use case document as input. It should undergo a
review by the Development Lead before being sent to the client for approval.
• Describes the scope, approaches, resources and intended testing activities.

Test case:

• A document which contains the test data; a step-by-step execution of features to verify their
expected results.
• Prepared by the Test Engineer based on the use cases from the FRS to check the functionality
of an application thoroughly.
• A flow or set of steps for testing a particular application so as to find defects. Different types
of testing should be in place for different phases of the SDLC.
• A set of data which consists of test condition, input data, expected result, actual result,
remarks etc.

23. What is Severity and Priority?

Severity:

Severity is defined as the degree of impact a defect has on the development or operation of a
component or application being tested.
Severity can be of following types:
• Critical: The defect that results in the termination of the complete system or one or
more component of the system and causes extensive corruption of the data. The failed
function is unusable and there is no acceptable alternative method to achieve the
required results then the severity will be stated as critical.
• Major: The defect that results in the termination of the complete system or one or
more component of the system and causes extensive corruption of the data. The failed
function is unusable but there exists an acceptable alternative method to achieve the
required results then the severity will be stated as major.
• Moderate: The defect that does not result in the termination, but causes the system to
produce incorrect, incomplete or inconsistent results then the severity will be stated as
moderate.
• Minor: The defect that does not result in the termination and does not damage the
usability of the system and the desired results can be easily obtained by working
around the defects then the severity is stated as minor.
• Cosmetic: The defect that is related to the enhancement of the system, where the
changes are related to the look and feel of the application, then the severity is stated as
cosmetic.

Priority:

Priority is defined as the order in which a defect should be fixed. The higher the priority, the
sooner the defect should be resolved.
Priority can be of following types:
• Low: The defect is an irritant which should be repaired, but repair can be deferred
until after more serious defects have been fixed.
• Medium: The defect should be resolved in the normal course of development
activities. It can wait until a new build or version is created.
• High: The defect must be resolved as soon as possible because the defect is affecting
the application or the product severely. The system cannot be used until the repair has
been done.
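One common way teams combine the two values is to triage by priority first and use severity only to break ties. A small sketch of that ordering (the numeric rankings below are an assumption; each team defines its own scale):

```python
# Assumed numeric rankings: lower number = fix sooner / bigger impact.
PRIORITY_RANK = {"High": 0, "Medium": 1, "Low": 2}
SEVERITY_RANK = {"Critical": 0, "Major": 1, "Moderate": 2, "Minor": 3, "Cosmetic": 4}

defects = [
    {"id": "D1", "priority": "Low",    "severity": "Critical"},  # e.g. crash in a rarely used feature
    {"id": "D2", "priority": "High",   "severity": "Minor"},     # e.g. misspelt company name on home page
    {"id": "D3", "priority": "Medium", "severity": "Major"},
]

# Fix order: priority decides *when*, severity breaks ties on *impact*.
triaged = sorted(defects, key=lambda d: (PRIORITY_RANK[d["priority"]],
                                         SEVERITY_RANK[d["severity"]]))
print([d["id"] for d in triaged])  # ['D2', 'D3', 'D1']
```

Note how D2 (high priority, minor severity) is fixed before D1 (low priority, critical severity), which is exactly the distinction the two terms exist to capture.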
24. What are different types of verifications and validation methods?

A main point is to distinguish between validation and verification :

 Verification is the check of the product against the specification ("Am I building the
product right?")
 Validation is the check of the specification against the user's needs ("Am I building the
right product?")

Four types of verification:


The four types are:

 Inspection (reviews)
 Analysis (mathematical verification)
 Testing (white-box testing)
 Demonstration (black box testing)

Four types of validation:

Validation testing in the V model has four activities:

 Unit Testing, validating the program


 Integration Testing, validating if the units work together
 System Testing, validating the system / architecture
 User Acceptance Testing, validating against requirements

25. Website testing techniques ?

Web Testing, in simple terms, is checking your web application for potential bugs before it is made
live, i.e. before the code is moved into the production environment.

During this stage, issues such as web application security, the functioning of the site, its
accessibility to disabled as well as regular users, and its ability to handle traffic are checked.

Web application testing checklist :


1. Functionality Testing:

This is used to check if your product is as per the specifications you intended for it as well as the
functional requirements you charted out for it in your developmental documentation. Testing
Activities Included:

Test that all links in your webpages are working correctly and make sure there are no broken
links. Links to be checked will include -

 Outgoing links

 Internal links

 Anchor Links

 MailTo Links

Tools that can be used: QTP, IBM Rational, Selenium

2. Usability testing:

Usability testing has now become a vital part of any web based project. It can be carried out by
testers like you or a small focus group similar to the target audience of the web application.

Test the site Navigation:

 Menus, buttons or Links to different pages on your site should be easily visible and
consistent on all webpages

 Content should be legible with no spelling or grammatical errors.

 Images if present should contain an "alt" text

Tools that can be used: Chalkmark, Clicktale, Clixpy and Feedback Army

3. Interface Testing:

Three areas to be tested here are - Application, Web and Database Server

 Application: Test that requests are sent correctly to the database and that the output at the
client side is displayed correctly. Errors, if any, must be caught by the application and must
be shown only to the administrator and not the end user.
 Web Server: Test that the web server handles all application requests without any service
denial.

 Database Server: Make sure queries sent to the database give expected results.

Test the system response when the connection between the three layers (Application, Web and
Database) cannot be established, and that an appropriate message is shown to the end user.

Tools that can be used: AlertFox, Ranorex

4. Database Testing:

The database is one critical component of your web application, and stress must be laid on
testing it thoroughly. Testing activities will include -

 Test if any errors are shown while executing queries

 Data Integrity is maintained while creating, updating or deleting data in database.

 Check response time of queries and fine tune them if necessary.

 Test data retrieved from your database is shown accurately in your web application

Tools that can be used: QTP, Selenium
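Two of the checks above, data integrity and schema constraints, can be sketched against an in-memory SQLite database using only Python's standard library; the `employees` table is a made-up example:

```python
import sqlite3

# In-memory SQLite stands in for the application's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("INSERT INTO employees (name) VALUES ('Asha')")
conn.commit()

# Data-integrity check: the row written is the row read back.
row = conn.execute("SELECT name FROM employees WHERE id = 1").fetchone()
assert row == ("Asha",)

# Constraint check: a NULL name must be rejected by the schema.
try:
    conn.execute("INSERT INTO employees (name) VALUES (NULL)")
    print("constraint NOT enforced")
except sqlite3.IntegrityError:
    print("NOT NULL constraint enforced")
```

A real database test suite would run the same kind of write-then-read and constraint checks against the application's actual schema.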

5. Compatibility testing.
Compatibility tests ensure that your web application displays correctly across different devices.
This would include -

Browser Compatibility Test: The same website will display differently in different browsers. You
need to test that your web application is displayed correctly across browsers and that JavaScript,
AJAX and authentication are working fine. You may also check for Mobile Browser
Compatibility.

The rendering of web elements like buttons, text fields etc. changes with a change in Operating
System. Make sure your website works fine for various combinations of Operating Systems such
as Windows, Linux, Mac and browsers such as Firefox, Internet Explorer, Safari etc.

Tools that can be used: NetMechanic

6. Performance Testing:
This will ensure your site works under all loads. Testing activities will include, but are not limited
to -

 Website application response times at different connection speeds

 Load test your web application to determine its behavior under normal and peak loads

 Stress test your web site to determine its break point when pushed beyond normal
loads at peak time.

 Test that if a crash occurs due to peak load, the site recovers from such an event

 Make sure optimization techniques like gzip compression and browser and server-side
caching are enabled to reduce load times

Tools that can be used: Loadrunner, JMeter

7. Security testing:

Security testing is vital for e-commerce websites that store sensitive customer information like
credit cards. Testing activities will include -

 Test that unauthorized access to secure pages is not permitted

 Restricted files should not be downloadable without appropriate access

 Check sessions are automatically killed after prolonged user inactivity

 On use of SSL certificates, website should re-direct to encrypted SSL pages.

Tools that can be used: Babel Enterprise, BFBTester and CROSS

28. Principles of software testing ?

1) Testing shows presence of defects: Testing can show the defects are present, but cannot
prove that there are no defects. Even after testing the application or product thoroughly we
cannot say that the product is 100% defect free. Testing always reduces the number of
undiscovered defects remaining in the software but even if no defects are found, it is not a proof
of correctness.

2) Exhaustive testing is impossible: Testing everything including all combinations of inputs and
preconditions is not possible. So, instead of doing the exhaustive testing we can use
risks and priorities to focus testing efforts. For example: if one screen in an application has
15 input fields, each having 5 possible values, then to test all the valid combinations you would
need 30,517,578,125 (5^15) tests. It is very unlikely that the project timescales would allow
for this number of tests. So, assessing and managing risk is one of the most important activities
and reasons for testing in any project.
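The arithmetic behind this example is easy to verify, and it shows why even generous automation budgets cannot cover the full input space:

```python
# 15 independent fields, 5 possible values each:
fields, values = 15, 5
combinations = values ** fields
print(combinations)  # 30517578125 -- over 30 billion test cases

# Even at 1,000 automated checks per second this would take close to a year:
seconds = combinations / 1000
print(round(seconds / (60 * 60 * 24)))  # 353 days
```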

3) Early testing: In the software development life cycle testing activities should start as early as
possible and should be focused on defined objectives.

4) Defect clustering: A small number of modules contains most of the defects discovered
during pre-release testing or shows the most operational failures.

5) Pesticide paradox: If the same kinds of tests are repeated again and again, eventually the
same set of test cases will no longer be able to find any new bugs. To overcome this “Pesticide
Paradox”, it is really very important to review the test cases regularly and new and different tests
need to be written to exercise different parts of the software or system to potentially find more
defects.

6) Testing is context dependent: Testing is basically context dependent. Different kinds of
software are tested differently. For example, safety-critical software is tested differently from an
e-commerce site.

7) Absence of errors fallacy: If the system built is unusable and does not fulfil the user’s needs
and expectations then finding and fixing defects does not help.

29. sanity, smoke, Adhoc, exploratory, accessibility testing ?

Smoke Testing
Smoke Testing is a kind of Software Testing performed after a software build to ascertain that the
critical functionalities of the program are working fine. It is executed "before" any detailed
functional or regression tests are executed on the software build. The purpose is to reject a badly
broken application, so that the QA team does not waste time installing and testing the software
application.
In Smoke Testing, the test cases chosen cover the most important functionality or components of
the system. The objective is not to perform exhaustive testing, but to verify that the critical
functionalities of the system are working fine. For example, a typical smoke test would be: verify
that the application launches successfully, check that the GUI is responsive, etc.
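A smoke suite can be sketched as a short list of go/no-go checks; the `Application` class below is a hypothetical stub standing in for the real build a team would actually launch:

```python
class Application:
    """Hypothetical application stub -- a real smoke suite would drive
    the deployed build (launch the binary, open the home page, etc.)."""
    def launch(self):
        self.running = True
        return True

    def gui_responsive(self):
        return getattr(self, "running", False)

def smoke_test(app):
    """Reject a badly broken build before detailed testing begins."""
    checks = {
        "application launches": app.launch(),
        "GUI is responsive": app.gui_responsive(),
    }
    failed = [name for name, ok in checks.items() if not ok]
    return ("PASS", []) if not failed else ("FAIL", failed)

verdict, failures = smoke_test(Application())
print(verdict)  # PASS -> the build is accepted for detailed testing
```

The key design point is that a smoke suite stays small and fast: it only answers "is this build worth testing at all?", never "is this build correct?".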

Sanity Testing
Sanity testing is a kind of Software Testing performed after receiving a software build, with
minor changes in code, or functionality, to ascertain that the bugs have been fixed and no further
issues are introduced due to these changes. The goal is to determine that the proposed
functionality works roughly as expected. If the sanity test fails, the build is rejected to save the time
and costs involved in a more rigorous testing.
Adhoc testing
Ad-hoc testing is carried out without following any formal process like requirement
documents, test plan, test cases, etc. Similarly while executing the ad-hoc testing there is NO
formal process of testing which can be documented. Ad-hoc testing is usually done to discover
the issues or defects which cannot be found by following the formal process. The testers who
perform this testing should have a very good and in-depth knowledge of the product or
application. When testers execute ad-hoc testing they only intend to break the system without
following any process or without having any particular use case in mind.

Exploratory testing

 As its name implies, exploratory testing is about exploring, finding out about the
software, what it does, what it doesn’t do, what works and what doesn’t work. The tester
is constantly making decisions about what to test next and where to spend the (limited)
time. This is an approach that is most useful when there are no or poor specifications and
when time is severely limited.

 Exploratory testing is a hands-on approach in which testers are involved in minimum
planning and maximum test execution.

 The planning involves the creation of a test charter, a short declaration of the scope of a
short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be
used.

 The test design and test execution activities are performed in parallel typically without
formally documenting the test conditions, test cases or test scripts. This does not mean
that other, more formal testing techniques will not be used. For example, the tester may
decide to use boundary value analysis but will think through and test the most important
boundary values without necessarily writing them down. Some notes will be written
during the exploratory-testing session, so that a report can be produced afterwards.

Accessibility testing

Accessibility Testing is a subset of usability testing, and it is performed to ensure that the
application being tested is usable by people with disabilities such as hearing impairment, color
blindness or old age, and by other disadvantaged groups. People with disabilities use assistive
technology which helps
them in operating a software product.

 Speech Recognition Software - converts the spoken word to text, which serves as
input to the computer.

 Screen reader software - Used to read out the text that is displayed on the screen
 Screen Magnification Software- Used to enlarge the monitor and make reading easy for
vision-impaired users.

 Special Keyboard - made for easy typing by users who have motor control
difficulties

30. Test log , test basis, bug leakage, Pilot testing, test strategy ?

Test log: the document which contains all information about the test results. A test log is
essentially the test case (already populated with test case ID, test description, test steps and
expected result) with two fields added: 'Actual result' and 'Pass/Fail criteria'.
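The relationship between a test case and its log entry can be sketched as follows; the field names and values are illustrative, not any tool's fixed schema:

```python
import csv, io

# A test case as designed before execution.
test_case = {
    "id": "TC-07",
    "description": "Login with valid credentials",
    "expected_result": "User lands on dashboard",
}

# The test log is the same record plus the two execution fields.
test_log = {**test_case,
            "actual_result": "User lands on dashboard",
            "status": "Pass"}

# Writing the log row out, e.g. for a CSV-based results report.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=test_log.keys())
writer.writeheader()
writer.writerow(test_log)
print(buf.getvalue().splitlines()[1])
```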

 Test basis : Test analysis is the process of looking at something that can be used to derive
test information. This basis for the tests is called the test basis.

 The test basis is the information we need in order to start the test analysis and create our
own test cases. Basically it’s a documentation on which test cases are based, such as
requirements, design specifications, product risk analysis, architecture and interfaces.

 We can use the test basis documents to understand what the system should do once built.
The test basis includes whatever the tests are based on. Sometimes tests can be based on
experienced user’s knowledge of the system which may not be documented.

 From testing perspective we look at the test basis in order to see what could be tested.
These are the test conditions. A test condition is simply something that we could test.

The test conditions that are chosen will depend on the test strategy or detailed test approach.
For example, they might be based on risk, models of the system, etc.

Test strategy: The choice of test approach or test strategy is one of the most powerful
factors in the success of the test effort and the accuracy of the test plans and estimates. This
factor is under the control of the testers and test leaders.

Major types of test strategies:

 Analytical: The risk-based strategy involves performing a risk analysis using project
documents and stakeholder input, then planning, estimating, designing, and
prioritizing the tests based on risk. Another analytical test strategy is the
requirements-based strategy, where an analysis of the requirements specification
forms the basis for planning, estimating and designing tests.
 Model-based: You can build mathematical models for loading and response for e-
commerce servers, and test based on that model. If the behavior of the system under
test conforms to that predicted by the model, the system is deemed to be working.
 Methodical: Methodical test strategies have in common the adherence to a pre-planned,
systematized approach that has been developed in-house, assembled from various concepts
developed in-house and gathered from outside, or adapted significantly from outside ideas,
and may have an early or late point of involvement for testing.
 Process- or standard-compliant: Process- or standard-compliant strategies have in
common reliance upon an externally developed approach to testing, often with little,
if any, customization, and may have an early or late point of involvement for testing.
 Dynamic: Dynamic strategies, such as exploratory testing, have in common
concentrating on finding as many defects as possible during test execution and
adapting to the realities of the system under test as it is when delivered, and they
typically emphasize the later stages of testing.
 Consultative or directed: Consultative or directed strategies have in common the
reliance on a group of non-testers to guide or perform the testing effort and typically
emphasize the later stages of testing simply due to the lack of recognition of the value
of early testing.
 Regression-averse: A regression-averse strategy may involve automating functional
tests prior to release of the function, in which case it requires early testing, but
sometimes the testing is almost entirely focused on testing functions that already have
been released, which is in some sense a form of post release test involvement.

Bug Leakage :

The bugs left undiscovered in a previous stage or cycle are called bug leakage for that stage/cycle.
E.g. suppose you have completed System Testing (ST), certified the application as fully tested
and sent it for UAT, but UAT uncovers some bugs which were not found at the ST stage. Those
bugs have leaked from the ST stage to UAT; this is called bug leakage.

Pilot testing :

Pilot Testing is verifying a component of the system, or the entire system, under real-time
operating conditions. It verifies the major functionality of the system before going into
production. This testing is done exactly between the UAT and Production. In Pilot testing, a
selected group of end users try the system under test and provide the feedback before the full
deployment of the system. In other words, it is nothing more than a dry run or a dress rehearsal
for the usability test that follows. Pilot Testing helps in early detection of bugs in the System.

Pilot testing is concerned with installing a system on customer site (or a user simulated
environment) for testing against continuous and regular use. Most common method of testing is
to continuously test the system to find out its weak areas. These weaknesses are then sent back to
the development team as bug reports, and these bugs are fixed in the next build of the system.
During this process sometimes acceptance testing is also included as part of compatibility
testing. This occurs when a system is being developed to replace an old one. Pilot testing will
answer the question like, whether the product or service have a potential market.

31. Explain requirement traceability RTM and its importance?

Requirement Traceability Matrix or RTM captures all requirements proposed by the client or
development team and their traceability in a single document delivered at the conclusion of the
life-cycle.

In other words, it is a document that maps and traces user requirements to test cases. The main
purpose of the Requirement Traceability Matrix is to see that all test cases are covered, so that no
functionality is missed while testing.

Requirement Traceability Matrix – Parameters include

 Requirement ID

 Risks

 Requirement Type and Description

 Trace to design specification

 Unit test cases

 Integration test cases

 System test cases

 User acceptance test cases

 Trace to test script

Types of Traceability Test Matrix

 Forward traceability: This matrix is used to check whether the project progresses in the
desired direction and for the right product. It maps requirements to test cases.
 Backward or reverse traceability: It is used to ensure whether the current product remains
on the right track. The purpose behind this type of traceability is to verify that we are not
expanding the scope of the project by adding code, design elements, tests or other work that
is not specified in the requirements. It maps test cases to requirements.
 Bi-directional traceability (Forward + Backward): This traceability matrix ensures that
all requirements are covered by test cases. It analyzes the impact of a change in requirements
caused by a defect in a work product, and vice versa.
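Both directions of traceability can be illustrated with a minimal RTM held as a Python dictionary; the requirement and test case IDs below are made up for the sketch:

```python
# Minimal RTM: requirement IDs mapped to the test cases that cover them.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],            # uncovered requirement -- a coverage gap
}

# Forward traceability: every requirement should have at least one test case.
uncovered = [req for req, tcs in rtm.items() if not tcs]
print("Uncovered requirements:", uncovered)

# Backward traceability: map each test case back to its requirement,
# so a failing test (or a changed requirement) shows its impact at once.
backward = {tc: req for req, tcs in rtm.items() for tc in tcs}
print(backward["TC-03"])  # REQ-002
```

In practice the RTM usually lives in a spreadsheet or a test management tool, but the two lookups above are exactly what forward and backward traceability mean.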

32. Explain usability testing?

In usability testing, the testers basically test the ease with which the user interfaces can be used.
It tests whether the application or the product built is user-friendly or not.

 Usability Testing is a black box testing technique.

 Usability testing also reveals whether users feel comfortable with your application or
Web site according to different parameters – the flow, navigation and layout, speed and
content – especially in comparison to prior or similar applications.

Usability Testing tests the following features of the software:-

 How easy it is to use the software.

 How easy it is to learn the software.

 How convenient the software is to the end user.

Usability testing includes the following five components:

1. Learnability: How easy is it for users to accomplish basic tasks the first time they
encounter the design?

2. Efficiency: How fast can experienced users accomplish tasks?

3. Memorability: When users return to the design after a period of not using it, does the
user remember enough to use it effectively the next time, or does the user have to start
over again learning everything?

4. Errors: How many errors do users make, how severe are these errors and how easily can
they recover from the errors?

5. Satisfaction: How much does the user like using the system?

Benefits of usability testing to the end user or the customer:-


 Better quality software.
 Software is easier to use.
 Software is more readily accepted by users.
 Shortens the learning curve for new users.

33. What are version, variant and release?

Version and release management:

Version and release management involves inventing an identification scheme for system
versions, planning when a new system version is to be produced, ensuring that version
management procedures and tools are properly applied, and planning and distributing new
system releases.

Versions/variants/releases:
 Version: An instance of a system, which is functionally distinct in some way from other
system instances.
 Variant: An instance of a system, which is functionally identical but nonfunctionally
distinct from other instances of a system.
 Release: An instance of a system, which is distributed to users outside of the
development team.

Release management:
Releases must incorporate changes forced on the system by errors discovered by users and by
hardware changes. They must also incorporate new system functionality. Release planning is
concerned with when to issue a system version as a release. A system release is not just a set of
executable programs. It may also include configuration files defining how the release is
configured for a particular installation, data files needed for system operation, an installation
program or shell script to install the system on target hardware, electronic and paper
documentation, and packaging and associated publicity. Systems are normally released on
CD-ROM or as downloadable installation files from the web.

34. Models in SDLC: - Waterfall, Agile?

Waterfall model –

The Waterfall Model was the first process model to be introduced. It is also referred to as a
linear-sequential life cycle model.
Advantages of waterfall model:

 This model is simple and easy to understand and use.

 It is easy to manage due to the rigidity of the model – each phase has specific
deliverables and a review process.

 In this model phases are processed and completed one at a time. Phases do not overlap.

 Waterfall model works well for smaller projects where requirements are very well
understood.

Disadvantages of waterfall model:

 Once an application is in the testing stage, it is very difficult to go back and change
something that was not well-thought out in the concept stage.

 No working software is produced until late during the life cycle.

 High amounts of risk and uncertainty.

 Not a good model for complex and object-oriented projects.

 Poor model for long and ongoing projects.

 Not suitable for the projects where requirements are at a moderate to high risk of
changing.

When to use the waterfall model:


 This model is used only when the requirements are very well known, clear and fixed.

 Product definition is stable.

 Technology is understood.

 There are no ambiguous requirements

 Ample resources with required expertise are available freely

 The project is short.

Agile Model –

Agile model is also a type of Incremental model. Software is developed in incremental, rapid
cycles. This results in small incremental releases with each release building on previous
functionality. Each release is thoroughly tested to ensure software quality is maintained. It is
used for time-critical applications. Extreme Programming (XP) is currently one of the most
well-known agile development life cycle models.

Advantages of Agile model:

 Customer satisfaction by rapid, continuous delivery of useful software.

 People and interactions are emphasized rather than process and tools. Customers,
developers and testers constantly interact with each other.

 Working software is delivered frequently (weeks rather than months).


 Face-to-face conversation is the best form of communication.

 Close, daily cooperation between business people and developers.

 Continuous attention to technical excellence and good design.

 Regular adaptation to changing circumstances.

 Even late changes in requirements are welcomed

Disadvantages of Agile model:

 In case of some software deliverables, especially the large ones, it is difficult to assess the
effort required at the beginning of the software development life cycle.

 There is lack of emphasis on necessary designing and documentation.

 The project can easily get taken off track if the customer representative is not clear
about the final outcome they want.

 Only senior programmers are capable of taking the kind of decisions required during the
development process. Hence it has no place for newbie programmers, unless combined
with experienced resources.

When to use agile testing :


 When new changes are needed to be implemented.

 To implement a new feature, the developers need to lose only a few days' work, or
even only hours, to roll back and implement it.

 Unlike the waterfall model, in the agile model very limited planning is required to get
started with the project. Agile assumes that end users' needs are ever-changing in a
dynamic business and IT world.

 System developers and stakeholders alike find they get more freedom of time and
options than if the software was developed in a more rigid, sequential way.

27. On what basis do we assign priority and severity to a bug? Give one example each of
high priority & low severity and high severity & low priority.

 Priority will be given as per the client requirements.


 Severity will be given as per the degree of impact the failure has on the program.

Examples for different scenarios:


 High Priority & High Severity:
1. All show-stopper bugs come under this category, i.e. blocker bugs due to which the tester is
not able to continue with software testing.
2. An example of High Priority & High Severity: upon login to the system, a "Run time
error" is displayed on the page, due to which the tester is not able to proceed with testing.

 High Priority & Low Severity:


A spelling mistake in the company name on the home page of the company's web site is surely
a High Priority issue. In terms of functionality it is not breaking anything, so we can mark it
Low Severity, but it makes a bad impact on the reputation of the company's site, so it is of the
highest priority to fix.

 Low Priority & High Severity:


The downloadable quarterly statement is not generated correctly from the website. Because the
bug occurs while generating the quarterly report, we mark it High Severity. However, the report
is generated only at the end of the quarter, so we have time to fix the bug and the priority to fix
it is Low.

 Low Priority & Low Severity:


A spelling mistake in the confirmation message, e.g. "You have registered success" instead of
"You have registered successfully".
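The four combinations above can be sketched as two independent fields on a bug record, with the fix order driven by priority first and severity second. The data here simply mirrors the examples and is purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Bug:
    summary: str
    severity: str  # impact on the system: "high" or "low"
    priority: str  # urgency of the fix: "high" or "low"

# The four example scenarios from the text.
bugs = [
    Bug("Run-time error on login blocks all testing",     "high", "high"),
    Bug("Company name misspelled on home page",           "low",  "high"),
    Bug("Quarterly statement not generated correctly",    "high", "low"),
    Bug("'success' instead of 'successfully' in message", "low",  "low"),
]

# Priority decides the fix order; severity breaks ties.
rank = {"high": 0, "low": 1}
fix_order = sorted(bugs, key=lambda b: (rank[b.priority], rank[b.severity]))
for b in fix_order:
    print(f"P:{b.priority:4} S:{b.severity:4} - {b.summary}")
```

The key design point is that priority and severity are separate fields: a cosmetic bug can jump to the front of the queue, while a severe bug can wait if its impact is not time-critical.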

35. What is the responsibility of a tester when a bug arrives at the time of testing?
Explain.

First check the status of the bug, then check whether the bug is valid or not. Then forward the
bug to the Team Leader and, after confirmation, forward it to the concerned developer. Also
perform retesting once the bug gets fixed.

36. In an application currently in production, one module of code is being modified. Is it


necessary to re-test the whole application or is it enough to just test functionality associated
with that module?

If one module of code is being modified in an application, only the modules associated with the
modified module should be retested. Regression testing can be very important in large
applications or projects, so regression tests should be carried out on the modules associated
with the modified module.
If the functionality change is major, then there should be a thorough check of all the modules;
otherwise, we test the changed module intensively and run sanity testing on the remaining
application if required.
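The idea of retesting only the associated modules can be sketched with a hypothetical module dependency map: given the changed module, the regression scope is that module plus everything that depends on it, transitively.

```python
# Hypothetical dependency map: module -> modules that depend on it.
dependents = {
    "billing":   ["invoicing", "reports"],
    "invoicing": ["reports"],
    "reports":   [],
    "login":     [],
}

def affected(changed, graph):
    """Return the changed module plus all transitive dependents:
    the minimum regression-test scope for this change."""
    seen, stack = set(), [changed]
    while stack:
        module = stack.pop()
        if module not in seen:
            seen.add(module)
            stack.extend(graph.get(module, []))
    return sorted(seen)

print(affected("billing", dependents))  # billing and everything above it
print(affected("login", dependents))    # isolated module: only itself
```

A change to an isolated module like "login" yields a small scope, while a change to a widely-used module pulls in its whole dependency chain, which matches the intuition in the answer above.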
37. How to overcome the challenge of not having input documentation for testing?

If the SRS or BRD is not available, QAs can talk to the developers or business analyst to:
 Clarify things
 Get confirmation
 Clear doubts

If there are any references like

 Screen shots
 Previous version of the application
 Wireframes

We can use them as testing references.


Smoke testing is another good option: from it we identify the basic functionality and major
bugs. Exploratory testing is also a good option: we can find out what the software does and
does not do. The testers have to make decisions on what to test next and where to spend more
time. If none of these work, we can simply test the application from our previous experience.

38. Should testing be done only after the build and execution phases are complete?

No, it is not necessary that testing be done only after the build and execution phases. In most
life cycle models, testing begins from the design phase. Testing should start as early as
possible, depending upon the SDLC model.
39. What group of teams can do software testing?

When it comes to testing, everyone can be involved, right from the developer to the project
manager to the customer. Below are the different groups which can be present in a project.

Anyone in the project team can do software testing.

Developers- Unit Testing;

Software Testers- Integration/Regression Testing;

Business Analysts- System/Validation Testing;

End-User/Customer- User Acceptance Testing.


Outsourced testers.
41. What is Negative testing?
Negative testing

Negative testing, commonly referred to as error path testing or failure testing, is generally done
to ensure the stability of the application.

Negative testing is the process of applying as much creativity as possible and validating the
application against invalid data. Its intended purpose is to check whether errors are shown to
the user where they are supposed to be, and whether bad values are handled gracefully.

Why negative testing is necessary:

The application or software’s functional reliability can be quantified only with effectively
designed negative scenarios. Negative testing not only aims to bring out any potential flaws that
could cause serious impact on the consumption of the product on the whole, but can be
instrumental in determining the conditions under which the application can crash. Finally, it
ensures that there is sufficient error validation present in the software.

Example:

Say for example you need to write negative test cases about a pen. The basic motive of the pen is
to be able to write on paper.

Some examples of negative testing could be:

 Change the medium it is supposed to write on from paper to cloth or a brick, and see
whether it still writes.
 Put the pen in liquid and verify whether it writes again.
 Replace the refill of the pen with an empty one and check that it stops writing.
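Applied to software, the same idea can be sketched as a set of negative test cases against a hypothetical input-validation function: every invalid input must be rejected via the error path, never silently accepted.

```python
def parse_age(value):
    """Parse a user-supplied age string; raise ValueError on invalid input.
    (Hypothetical function used only to illustrate negative testing.)"""
    age = int(value)              # raises ValueError for non-numeric input
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

# Negative test cases: each invalid input must raise, never return.
invalid_inputs = ["abc", "", "-5", "200", "12.5"]
for bad in invalid_inputs:
    try:
        parse_age(bad)
    except ValueError:
        pass                      # expected: the error path was exercised
    else:
        raise AssertionError(f"accepted invalid input: {bad!r}")

print("all negative cases rejected")
```

Note that the negative suite covers several distinct failure modes (non-numeric text, empty input, out-of-range values, wrong format), which is exactly the creativity the definition above calls for.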

42. What is a "Good Tester"?

Software testers are the backbone of all organizations because they are the ones who are
responsible for ensuring the quality of the project or product. But how do you spot the ‘best of
the best’ among testers? Here are 21 qualities and characteristics that are often seen in great testers:
1. Creative mind
2. Analytical skills
3. Curiosity
4. Good listener
5. Proactively passionate
6. Quick learner
7. Domain knowledge
8. Client oriented
9. Test Automation And Technical Knowledge
10. Ability to organize and prioritize
11. Ability to report
12. Business oriented
13. Intellectual ability
14. Good observer
15. Good time manager
16. Perseverance
17. Ability To Identify And Manage Risks
18. Quality oriented
19. Ability to work in team
20. Attention to detail
21. Ability to communicate

43. If you have ‘n’ requirements and you have less time how do you prioritize the
requirements?

We should first check the most critical or important functionalities which affect the system,
and then check the remaining requirements based on priority.

44. What all types of testing you could perform on a web based application?

The various types of testing we can perform on web-based applications are:

1. Functional testing
2. Usability testing
3. Interface testing
4. Database testing
5. Compatibility testing
6. Performance testing
7. Security testing

45. Why Software Testing as a career? Prepare your own answers


Actually, both development and testing are good jobs. Testing is vast and helps us to think
about a situation from different points of view and find the best among them, to make the
product unique in the market. I love to be in a team where a quality product can be produced,
as none of us will be happy if our product doesn't work as intended. So I am happy to be in a
team where customer satisfaction is considered most important.