
What is a Test Case?

A test case is a set of conditions, variables, and inputs developed to achieve a particular
goal or objective on an application, in order to judge its capabilities or features.
It may take more than one test case to determine the true functionality of the application
being tested. Every requirement or objective to be achieved needs at least one test case.
Some software development methodologies like Rational Unified Process (RUP)
recommend creating at least two test cases for each requirement or objective; one for
performing testing through positive perspective and the other through negative
perspective.

Test Case Structure

A formal written test case comprises three parts -

1. Information
Information is the general data about the test case: its identifier, creator, version,
name, purpose or brief description, and dependencies on other test cases.
2. Activity
Activity covers the actual test case activities: the test environment, the activities to be
done at test case initialization, the activities to be done after the test case is performed,
the step-by-step actions to follow while testing, and the input data to be supplied.
3. Results
Results are the outcomes of a performed test case: the expected results and the actual
results. (A minimal code sketch of this structure follows the list.)
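
As an illustration, here is a minimal Python sketch of how these three parts might be
captured as a simple record; the field names are only an assumption about how a team could
organize them, not a prescribed format.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCase:
    # Information: general data about the test case
    identifier: str
    name: str
    creator: str
    version: str
    purpose: str
    dependencies: List[str] = field(default_factory=list)
    # Activity: environment, initialization/cleanup activities, steps and input data
    environment: str = ""
    initialization: List[str] = field(default_factory=list)
    steps: List[str] = field(default_factory=list)
    cleanup: List[str] = field(default_factory=list)
    input_data: dict = field(default_factory=dict)
    # Results: expected outcome, and the actual outcome recorded after execution
    expected_result: str = ""
    actual_result: Optional[str] = None

    def passed(self) -> bool:
        # A test case passes when the actual result matches the expected result.
        return self.actual_result == self.expected_result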

Designing Test Cases

Test cases should be designed and written by someone who understands the function or
technology being tested. A test case should include the following information -

• Purpose of the test


• Software requirements and Hardware requirements (if any)
• Specific setup or configuration requirements
• Description on how to perform the test(s)
• Expected results or success criteria for the test

Designing test cases can be time consuming within a testing schedule, but the time is well
spent because good test cases avoid, or at least reduce, unnecessary retesting and
debugging. Organizations can take the test case approach in their own context and
according to their own perspectives. Some follow a general, step-wise approach, while
others may opt for a more detailed and complex one. It is important to decide between the
two extremes and judge what would work best for you. Designing proper test cases is vital
for your software testing plans, as many bugs, ambiguities, inconsistencies and slip-ups
can be caught in time, and it also saves time otherwise spent on continuous debugging and
re-testing.

Life Cycle of a Bug

Given below are the stages of a bug's life span. Test reports describe in detail the behavior
of the bug at each stage.

New
This is the first stage of the bug life cycle, in which the tester reports a bug. The presence of
the bug becomes evident when the tester runs the newly developed application and it does
not respond in the expected manner. The bug is then sent to the testing lead for approval.

Open
When the bug is reported to the testing lead, he examines the bug by retesting the
product. If he finds that the bug is genuine, he approves it and changes its status to 'open'.

Assign
Once the bug has been approved and found genuine by the testing lead, it is sent to the
concerned software development team for resolution. It can be assigned to the team that
created the software, or it may be assigned to some specialized team. After assigning the
bug to the software team, the status of the bug is changed to 'assign'.

Test
The team to which the bug has been assigned works on removing it. Once they have
finished fixing the bug, it is sent back to the testing team for a retest. However, before
sending the bug back to the testing team, its status is changed to 'test' in the report.

Deferred
If the development team changes the status of the bug to 'deferred', it means that the bug
will be fixed in a later release of the software. There can be myriad reasons why the
software team may not consider fixing the bug urgently, including lack of time, low
impact of the bug, or negligible potential of the bug to disturb the normal functioning of
the software.

Rejected
Although the testing lead might have approved the bug as genuine, the software
development team may not always agree. Ultimately, it is the prerogative of the
development team to decide whether the bug is really genuine or not. If they doubt the
presence or impact of the bug, they may change its status to 'rejected'.
Duplicate
If the development team finds that the same bug has been repeated twice or there are two
bugs which point to the same concept, then the status of one bug is changed to 'duplicate'.
In this case, fixing one bug automatically takes care of the other bug.

Verified
If the software development team sends the fixed bug back for retesting, the bug
undergoes a rigorous testing procedure again. If it is no longer found at the end of the test,
its status is changed to 'verified.'

Reopened
If the bug still exists, its status is changed to 'reopened'. The bug then traverses its entire
life cycle once again.

Closed
If no occurrence of bug is reported and if the software functions normally, then the bug is
'closed.' This is the final stage in which the bug has been fixed, tested and approved.
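
The stages above form a small state machine. The following Python sketch models them as
an enumeration with a set of allowed transitions; the exact workflow varies between
organizations and bug-tracking tools, so the transition table here is only an assumption
that mirrors the stages just described.

from enum import Enum

class BugStatus(Enum):
    NEW = "new"
    OPEN = "open"
    ASSIGNED = "assigned"
    TEST = "test"
    DEFERRED = "deferred"
    REJECTED = "rejected"
    DUPLICATE = "duplicate"
    VERIFIED = "verified"
    REOPENED = "reopened"
    CLOSED = "closed"

# Allowed moves between statuses, following the life cycle described above.
TRANSITIONS = {
    BugStatus.NEW: {BugStatus.OPEN, BugStatus.REJECTED, BugStatus.DUPLICATE},
    BugStatus.OPEN: {BugStatus.ASSIGNED, BugStatus.REJECTED, BugStatus.DUPLICATE, BugStatus.DEFERRED},
    BugStatus.ASSIGNED: {BugStatus.TEST, BugStatus.DEFERRED},
    BugStatus.TEST: {BugStatus.VERIFIED, BugStatus.REOPENED},
    BugStatus.DEFERRED: {BugStatus.ASSIGNED},
    BugStatus.REOPENED: {BugStatus.ASSIGNED},
    BugStatus.VERIFIED: {BugStatus.CLOSED},
    BugStatus.REJECTED: set(),
    BugStatus.DUPLICATE: set(),
    BugStatus.CLOSED: set(),
}

def change_status(current: BugStatus, new: BugStatus) -> BugStatus:
    # Move a bug to a new status only if the workflow above allows it.
    if new not in TRANSITIONS[current]:
        raise ValueError(f"cannot move a bug from {current.value} to {new.value}")
    return new

For example, change_status(BugStatus.NEW, BugStatus.OPEN) succeeds, while trying to jump
straight from 'new' to 'closed' raises an error.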

The software testing life cycle consists of the various stages of testing through which a
software product goes, and describes the various activities pertaining to testing that are
carried out on the product. What follows is an explanation of the STLC and its phases.

Introduction to Software Testing Life Cycle

Every organization has to undertake testing of each of its products. However, the way it
is conducted differs from one organization to another. This refers to the life cycle of the
testing process. It is advisable to carry out the testing process from the initial phases, with
regard to the Software Development Life Cycle or SDLC to avoid any complications.

Software Testing Life Cycle Phases

Software testing has its own life cycle that intersects every stage of the SDLC. The
software testing life cycle comprises the following phases:

1. Requirement Stage
2. Test Planning
3. Test Analysis
4. Test Design
5. Test Verification and Construction
6. Test Execution
7. Result Analysis
8. Bug Tracking
9. Reporting and Rework
10. Final Testing and Implementation
11. Post Implementation

Requirement Stage
This is the initial stage of the life cycle process in which the developers take part in
analyzing the requirements for designing a product. Testers can also involve themselves
as they can think from the users' point of view which the developers may not. Thus a
panel of developers, testers and users can be formed. Formal meetings of the panel can be
held in order to document the requirements discussed which can be further used as
software requirements specifications or SRS.

Test Planning
Test planning means laying out a plan well in advance in order to reduce risks later. Without a
good plan, no work can lead to success, be it software-related or routine work. A test plan
document plays an important role in achieving a process-oriented approach. Once the
requirements of the project are confirmed, a test plan is documented. The test plan
structure is as follows:

1. Introduction: This describes the objective of the test plan.
2. Test items: The items referred to in preparing this document, such as the SRS and the
project plan, are listed here.
3. Features to be tested: This describes the coverage area of the test plan, i.e., the list
of features to be tested, based on the implicit and explicit requirements from the
customer.
4. Features not to be tested: Features that can be skipped in the testing phase are listed
here. Features that are out of the scope of testing, like incomplete modules or
low-severity items (e.g., GUI features that do not hamper the rest of the process),
can be included in the list.
5. Approach: This is the test strategy, which should be appropriate to the level of the
plan. It should be consistent with the higher and lower levels of the plan.
6. Item pass/fail criteria: This relates to show-stopper issues. The criteria used must
make it clear whether a test item has passed or failed.
7. Suspension criteria and resumption requirements: The suspension criteria specify
when all or a portion of the testing activities are to be suspended, whereas the
resumption criteria specify when testing of the suspended portion can resume.
8. Test deliverables: This includes a list of documents, reports and charts that are
required to be presented to the stakeholders on a regular basis during testing and
when testing is completed.
9. Testing tasks: This section is needed to avoid confusion over whether defects should
be reported against future functionality. It also helps users and testers avoid
incomplete functions and prevents waste of resources.
10. Environmental needs: The special requirements of the test plan, depending on the
environment in which the application has to be designed, are listed here.
11. Responsibilities: This section assigns responsibility to the person who can be held
accountable in case of a risk.
12. Staffing and training needs: Training on the application/system, and on the testing
tools to be used, needs to be given to the staff members who are responsible for the
application.
13. Risks and contingencies: This emphasizes the probable risks and the various events
that can occur, and what can be done in such a situation.
14. Approval: This decides who can approve the process as complete and allow the
project to proceed to the next level; this depends on the level of the plan.

Test Analysis
Once the test plan documentation is done, the next stage is to analyze what types of
software testing should be carried out at the various stages of SDLC.
Test Design
Test design is done based on the requirements of the project documented in the SRS. This
phase decides whether manual or automated testing is to be done. In automation testing,
different paths for testing are identified first, and scripts are written if required. An
end-to-end checklist covering all the features of the project is also needed.

Test Verification and Construction


In this phase the test plans, the test design and the automated test scripts are completed.
Stress and performance testing plans are also completed at this stage. When the
development team is done with a unit of code, the testing team is required to help them
test that unit and report any bugs found. Integration testing and bug reporting are done in
this phase of the software testing life cycle.

Test Execution
Planning and execution of the various test cases is done in this phase. Once unit testing is
completed, functional testing is done at this stage. At first, top-level testing is done to
find top-level failures, and bugs are reported immediately to the development team to get
the required workaround. Test reports have to be documented properly and the bugs have
to be reported to the development team.

Result Analysis
Once the bug is fixed by the development team, i.e., after the successful execution of the
test case, the testing team has to retest it to compare the expected values with the actual
values and declare the result as pass/fail.

Bug Tracking
This is one of the important stages as the Defect Profile Document (DPD) has to be
updated for letting the developers know about the defect. Defect Profile Document
contains the following

1. Defect Id: Unique identification of the Defect.


2. Test Case Id: Test case identification for that defect.
3. Description: Detailed description of the bug.
4. Summary: This field contains some keyword information about the bug, which
can help in minimizing the number of records to be searched.
5. Defect Submitted By: Name of the tester who detected/reported the bug.
6. Date of Submission: Date at which the bug was detected and reported.
7. Build No.: The build number of the application in which the defect was detected.
8. Version No.: The version information of the software application in which the bug
was detected and fixed.
9. Assigned To: Name of the developer who is supposed to fix the bug.
10. Severity: Degree of severity of the defect.
11. Priority: Priority of fixing the bug.
12. Status: This field displays current status of the bug.
The contents of a bug report cover all of the fields mentioned above.
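
A defect profile entry with these fields can be sketched as a small Python record. This is
only an illustrative shape, assuming the twelve fields listed above; real defect-tracking
tools define their own schemas.

from dataclasses import dataclass
from datetime import date
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class DefectRecord:
    defect_id: str          # unique identification of the defect
    test_case_id: str       # test case that exposed the defect
    summary: str            # keyword information used to narrow searches
    description: str        # detailed description of the bug
    submitted_by: str       # tester who detected/reported the bug
    date_of_submission: date
    build_no: str           # build of the application in which the bug was found
    version_no: str         # version in which the bug was detected and fixed
    assigned_to: str        # developer who is supposed to fix the bug
    severity: Severity      # degree of impact of the defect
    priority: int           # urgency of fixing the bug (e.g. 1 = highest)
    status: str             # current status, e.g. 'new', 'open', 'assigned'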

Reporting and Rework


Testing is an iterative process. Once a bug is reported and the development team fixes it,
the software has to undergo the testing process again to ensure that the bug found is
resolved. Regression testing has to be done. Once the Quality Analyst assures that the
product is ready, the software is released for production. Before release, the software has
to undergo one more round of top-level testing. Thus testing is an ongoing process.

Final Testing and Implementation


This phase focuses on the remaining levels of testing, such as acceptance, load, stress,
performance and recovery testing. The application needs to be verified under specified
conditions with respect to the SRS. Various documents are updated and different matrices
for testing are completed at this stage of the software testing life cycle.

Post Implementation
Once the tests are evaluated, the recording of errors that occurred during various levels of
the software testing life cycle, is done. Creating plans for improvement and enhancement
is an ongoing process. This helps to prevent similar problems from occurring in future
projects. In short, planning for improvement of the testing process for future applications
is done in this phase.

• What is regression testing?


Regression testing is the testing of a particular component of the software or the entire
software after modifications have been made to it. The aim of regression testing is to
ensure new defects have not been introduced in the component or software, especially in
the areas where no changes have been made. In short, regression testing is testing to
ensure that nothing that should not have changed has been changed by the modifications made.
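
As a concrete, hedged sketch of what a regression suite looks like in practice, the pytest
tests below pin down the existing behaviour of a hypothetical apply_discount function;
re-running them after every modification flags any regression in behaviour that was not
supposed to change.

import pytest

def apply_discount(price: float, percent: float) -> float:
    # Existing, previously tested behaviour that must not change.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_no_discount_keeps_price():
    assert apply_discount(100.0, 0) == 100.0

def test_half_discount_halves_price():
    assert apply_discount(80.0, 50) == 40.0

def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)

Running pytest after each change re-executes these unchanged tests; a new failure in any of
them signals that a defect has been introduced in an area where no change was intended.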

• What is a Review?
A review is an evaluation of a product or project status to ascertain any discrepancies
from the planned results and to recommend improvements to the product. Common examples of
reviews are the informal or peer review, technical review, inspection, walkthrough and
management review.
Manual testing is one of the oldest and most effective ways to carry out software testing.
Whenever new software is developed, software testing needs to be done to test its
effectiveness, and it is for this purpose that manual testing is required. Manual testing is
a type of software testing that is an important component of the IT job sector; it does not
use any automation methods and is therefore tedious and laborious.

Manual testing requires a tester with certain qualities, because the job demands them: he
needs to be observant, creative, innovative, speculative, open-minded, resourceful, patient
and skillful, and to possess other qualities that will help him with his job. In what
follows we concentrate not on what a tester is like, but on some common manual testing
interview questions.

Manual Testing Interview Questions for Freshers

The following are some of the interview questions for manual testing. This will give you
a fair idea of what these questions are like.

• What is Accessibility Testing?
• What is Ad Hoc Testing?
• What is Alpha Testing?
• What is Beta Testing?
• What is Component Testing?
• What is Compatibility Testing?
• What is Data Driven Testing?
• What is Concurrency Testing?
• What is Conformance Testing?
• What is Context Driven Testing?
• What is Conversion Testing?
• What is Depth Testing?
• What is Dynamic Testing?
• What is End-to-End testing?
• What is Endurance Testing?
• What is Installation Testing?
• What is Gorilla Testing?
• What is Exhaustive Testing?
• What is Localization Testing?
• What is Loop Testing?
• What is Mutation Testing?
• What is Positive Testing?
• What is Monkey Testing?
• What is Negative Testing?
• What is Path Testing?
• What is Ramp Testing?
• What is Performance Testing?
• What is Recovery Testing?
• What is Regression Testing?
• What is Re-testing?
• What is Stress Testing?
• What is Sanity Testing?
• What is Smoke Testing?
• What is Volume Testing?
• What is Usability Testing?
• What is Scalability Testing?
• What is Soak Testing?
• What is User Acceptance Testing?

These were some of the manual testing interview questions for freshers, let us now move
on to other forms of manual testing questions.

Software Testing Interview Questions for Freshers

Here are some software testing interview questions that will help you get into the more
intricate and complex formats of this form of manual testing.

• Can you explain the V model in manual testing?


• What is the waterfall model in manual testing?
• What is the structure of bug life cycle?
• What is the difference between bug, error and defect?
• How does one add objects into the Object Repository?
• What are the different modes of recording?
• What does 'testing' mean?
• What is the purpose of carrying out manual testing for a background process that
does not have a user interface and how do you go about it?
• Explain with an example what test case and bug report are.
• How does one go about reviewing a test case and what are the types that are
available?
• What is AUT?
• What is compatibility testing?
• What is alpha testing and beta testing?
• What is the V model?
• What is debugging?
• What is the difference between debugging and testing? Explain in detail.
• What is the fish model?
• What is port testing?
• Explain in detail the difference between smoke and sanity testing.
• What is the difference between usability testing and GUI?
• Why does one require object spy in QTP?
• What is the test case life cycle?
• Why does one save .vbs library files in QTP/WinRunner?
• When do we use Update mode in QTP?
• What is virtual memory?
• What is visual source safe?
• What is the difference between test scenarios and test strategy?
• What is the difference between properties and methods in QTP?
Testing is a process of gathering information by making observations and comparing
them to expectations. – Dale Emery

In our day-to-day life, when we go out shopping for products such as vegetables, clothes or
pens, we check them before purchasing, for our satisfaction and to get the maximum benefit.
For example, when we intend to buy a pen, we test it before actually purchasing it: does it
write, does it break if it falls, does it work in extreme climatic conditions, and so on.
So, whether it is software, hardware or any other product, testing turns out to be
mandatory.

What is Software Testing?


Software Testing is a process of verifying and validating whether a program performs
correctly with no bugs. It is the process of analyzing or operating software for the purpose
of finding bugs. It also helps to identify the defects/flaws/errors that may appear in the
application code, which need to be fixed. Testing does not only mean fixing a bug in the
code; it also means checking whether the program behaves according to the given
specifications and testing strategies. There are various software testing strategies, such
as the white box, black box and grey box testing strategies.

Need of Software Testing Types


The types of software testing depend upon the different types of defects they are meant to detect. For example:

• Functional Testing is done to detect functional defects in a system.
• Performance Testing is performed to detect defects that appear when the system does not
perform according to the specifications.
• Usability Testing is done to detect usability defects in the system.
• Security Testing is done to detect bugs/defects in the security of the system.

The list goes on as we move on towards different layers of testing.

Types of Software Testing


Various software testing methodologies guide you through the different software testing
types. For those who are new to this subject, the following is an overview of how to go
about software testing. To determine the true functionality of the application being tested,
test cases are designed to help the developers. Test cases provide you with guidelines for
going through the process of testing the software. Software testing includes two basic
types, viz. Manual Scripted Testing and Automated Testing.

• Manual Scripted Testing: This is considered to be one of the oldest types of software
testing methods, in which test cases are designed and reviewed by the team before being
executed.
• Automated Testing: This software testing type applies automation to testing, which can
be used in various parts of the software process such as test case management, execution
of test cases, defect management and reporting of bugs/defects. The bug life cycle helps
the tester decide how to log a bug and guides the developer in deciding the priority of
fixing it, depending on its severity. Logging a bug documents the contents of the bug that
is to be fixed. This can be done with the help of bug tracking tools such as Bugzilla and
defect tracking management tools like Test Director.

Other Software Testing Types


The software testing life cycle is the process that explains the flow of the tests to be
carried out at each step of software testing of the product. The V-Model, i.e., the
Verification and Validation Model, is a model used in the improvement of the software
project. This model has the software development life cycle on one side and the software
testing life cycle on the other. A checklist for the software tester sets a baseline that
guides him in carrying out the day-to-day activities.

Black Box Testing


It explains the process of giving input to the system and checking the output, without
considering how the system generates that output. It is also called Behavioral Testing.

Functional Testing: In this type of testing, the software is tested for the functional
requirements. This checks whether the application is behaving according to the
specification.

Performance Testing: This type of testing checks whether the system performs properly
according to the user's requirements. Performance testing covers Load and Stress Testing,
in which load is applied to the system internally or externally.

1. Load Testing: In this type of performance testing, the load on the system is raised up
to its specified limits in order to check the performance of the system when higher loads
are applied.
2. Stress Testing: In this type of performance testing, the system is tested beyond its
normal expectations or operational capacity.

Usability Testing: This type of testing is also called 'Testing for User Friendliness'.
This testing checks the ease of use of an application. Read more on introduction to
usability testing.

Regression Testing: Regression testing is one of the most important types of testing; it
checks that a small change in any component of the application does not affect the
unchanged components. Testing is done by re-executing previously run test cases against
the modified application.

Smoke Testing: Smoke testing is used to check the testability of the application. It is also
called 'Build Verification Testing' or 'Link Testing'. It checks whether the application is
ready for further major testing, without dealing with the finer details.
Sanity Testing: Sanity testing checks the basic behavior of the system. This type of
software testing is also called Narrow Regression Testing.

Parallel Testing: Parallel testing is done by comparing results from two different
systems like old vs new or manual vs automated.

Recovery Testing: Recovery testing is necessary to check how fast the system is able to
recover from any hardware failure, catastrophic problem or other type of system crash.

Installation Testing: This type of software testing identifies the ways in which the
installation procedure can lead to incorrect results.

Compatibility Testing: Compatibility testing determines whether an application performs as
expected under supported configurations, with various combinations of hardware and
software packages. Read more on compatibility testing.

Configuration Testing: This testing is done to test for compatibility issues. It determines
minimal and optimal configuration of hardware and software, and determines the effect
of adding or modifying resources such as memory, disk drives and CPU.

Compliance Testing: This type of testing checks whether the system was developed in
accordance with standards, procedures and guidelines.

Error-Handling Testing: This software testing type determines the ability of the system
to properly process erroneous transactions.

Manual-Support Testing: This type of software testing tests the interface between people
and the application system.

Inter-Systems Testing: This type of software testing tests the interfaces between two or
more application systems.

Exploratory Testing: Exploratory testing is a type of software testing, similar to ad hoc
testing, that is performed to explore the software's features. Read more on exploratory
testing.

Volume Testing: This testing is done when a huge amount of data is processed through the
application.

Scenario Testing: This type of software testing provides a more realistic and meaningful
combination of functions, rather than artificial combinations that are obtained through
domain or combinatorial test design.

User Interface Testing: This type of testing is performed to check how user-friendly the
application is. The user should be able to use the application without any assistance from
system personnel.

System Testing: System testing is testing conducted on a complete, integrated system to
evaluate the system's compliance with the specified requirements. This type of software
testing validates that the system meets its functional and non-functional requirements, and
is also intended to test beyond the bounds defined in the software/hardware requirement
specifications.

User Acceptance Testing: Acceptance testing is performed to verify that the product is
acceptable to the customer and fulfills the customer's specified requirements. This testing
includes Alpha and Beta testing.

1. Alpha Testing: Alpha testing is performed at the developer's site by the customer
in a closed environment. This testing is done after system testing.
2. Beta Testing: This type of software testing is done at the customer's site by the
customer in the open environment. The presence of the developer, while
performing these tests, is not mandatory. This is considered to be the last step in
the software development life cycle as the product is almost ready.

White Box Testing


It is the process of giving input to the system and checking how the system processes that
input to generate the output. It is mandatory for the tester to have knowledge of the
source code.

Unit Testing: This type of testing is done at the developer's site to check whether a
particular piece/unit of code is working fine. Unit testing deals with testing the unit as a
whole.

Static and Dynamic Analysis: In static analysis, it is required to go through the code in
order to find out any possible defect in the code. Whereas, in dynamic analysis the code
is executed and analyzed for the output.

Statement Coverage: This type of testing assures that the code is executed in such a way
that every statement of the application is executed at least once.

Decision Coverage: This type of testing ensures that every decision in the application is
executed at least once, so that each decision evaluates to both true and false.

Condition Coverage: In this type of software testing, each and every condition is exercised
by making it both true and false at least once.

Path Coverage: Each and every path within the code is executed at least once to get a
full path coverage, which is one of the important parts of the white box testing.
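
To make the difference between these coverage criteria concrete, consider the following
small sketch (the function is hypothetical):

def absolute_value(x: int) -> int:
    if x < 0:        # the only decision in this function
        x = -x
    return x

# Statement coverage: the single call absolute_value(-3) executes every
# statement (the 'if', the assignment and the return), so statement
# coverage is 100% even though the decision never evaluated to False.
assert absolute_value(-3) == 3

# Decision (branch) coverage additionally requires the decision to evaluate
# to both True and False, so a second test with a non-negative input is needed.
assert absolute_value(4) == 4

Condition and path coverage impose progressively stricter requirements on the combinations
of condition outcomes and execution paths that the tests must exercise.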

Integration Testing: Integration testing is performed when various modules are integrated
with each other to form a sub-system or a system. It mostly focuses on the design and
construction of the software architecture. Integration testing is further classified into
Bottom-Up Integration and Top-Down Integration testing.

1. Bottom-Up Integration Testing: In this type of integration testing, the lowest-level
components are tested first, and 'Drivers' are used to stand in for the higher-level
components that call them.
2. Top-Down Integration Testing: This is the opposite of the bottom-up approach: the
top-level modules are tested first, and the lower branches of each module are tested step
by step using 'Stubs' until the lowest-level modules are reached.

Security Testing: Testing that confirms how well a system protects itself against
unauthorized internal or external access and against willful damage to the code is security
testing of the system. Security testing assures that the program is accessed by authorized
personnel only. Read more in the brief introduction to security testing.

Mutation Testing: In this type of software testing, small modifications (mutations) are
deliberately introduced into the application code in order to check whether the existing
test cases detect them.
• Explain in short sanity testing, ad hoc testing and smoke testing.
Sanity testing is a basic test, conducted to check whether all the components of the
software compile with each other without any problem. It makes sure that no conflicting or
duplicate functions or global variable definitions have been made by different developers.
It can also be carried out by the developers themselves.

Smoke testing on the other hand is a testing methodology used to cover all the major
functionality of the application without getting into the finer nuances of the application. It
is said to be the main functionality oriented test.

Ad hoc testing is different from smoke and sanity testing. This term is used for software
testing which is performed without any sort of planning and/or documentation. These tests
are intended to run only once; however, if a defect is found, they can be carried out again.
Ad hoc testing is also said to be a part of exploratory testing.
• What are stubs and drivers in manual testing?
Both stubs and drivers are a part of incremental testing. There are two approaches used in
incremental testing, namely the bottom-up and top-down approaches. Drivers are used in
bottom-up testing. They are modules that invoke and exercise the components to be tested,
standing in for the real higher-level modules that will eventually call them.

A stub is a skeletal or special-purpose implementation of a software component, used to
develop or test a component that calls it or is otherwise dependent on it. It is the
replacement for the called component.
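
The following Python sketch shows the two ideas side by side; the modules and function
names are hypothetical and only illustrate the roles a driver and a stub play.

# Component under test: a low-level module.
def calculate_tax(amount: float, rate: float) -> float:
    return round(amount * rate, 2)

# Driver (bottom-up integration): the real billing module that will call
# calculate_tax() does not exist yet, so a driver stands in for it and
# exercises the component directly.
def tax_driver():
    for amount, rate, expected in [(100.0, 0.2, 20.0), (50.0, 0.1, 5.0)]:
        assert calculate_tax(amount, rate) == expected

# Stub (top-down integration): here the high-level invoicing code is under
# test, but the payment gateway it depends on is not ready, so a stub
# replaces the called component and returns a canned answer.
def charge_card_stub(card_number: str, amount: float) -> bool:
    return True  # always "succeeds" so the caller can be tested

def create_invoice(amount: float, card_number: str) -> str:
    paid = charge_card_stub(card_number, amount)
    return "PAID" if paid else "PAYMENT FAILED"

tax_driver()
assert create_invoice(120.0, "4111-1111-1111-1111") == "PAID"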

• Explain priority, severity in software testing.


Priority is the level of business importance, which is assigned to a defect found. On the
other hand, severity is the degree of impact, the defect can have on the development or
operation of the component or the system.
• Explain the waterfall model in testing.
Waterfall model is a part of software development life cycle, as well as software testing.
It is one of the first models to be used for software testing.

• Tell me about V model in manual testing.


V model is a framework, which describes the software development life cycle activities
right from requirements specification up to software maintenance phase. Testing is
integrated in each of the phases of the model. The phases of the model start with user
requirements and are followed by system requirements, global design, detailed design,
implementation, and end with system testing of the entire system. Each phase of the model
has a respective testing activity integrated into it, carried out in parallel with the
development activities. The four test levels used by this model are component testing,
integration testing, system testing and acceptance testing.

• Difference between bug, error and defect.


Bug and defect essentially mean the same thing: a flaw in a component or system which can
cause the component or system to fail to perform its required function. If a bug or defect
is encountered during the execution phase of software development, it can cause the
component or the system to fail. An error, on the other hand, is a human mistake which
gives rise to an incorrect result. You may want to read about how to log a bug (defect),
the contents of a bug, the bug life cycle, and the statuses used during a bug life cycle,
which will help you understand the terms bug and defect better.

• What is compatibility testing?


Compatibility testing is a part of non-functional tests carried out on the software
component or the entire software to evaluate the compatibility of the application with the
computing environment. It can be with the servers, other software, computer operating
system, different web browsers or the hardware as well.

• What is integration testing?


Integration testing is one of the software testing types in which tests are conducted on
the interfaces between components, and on the interactions of the different parts of the
system with the operating system, file system, hardware and other software. It may be
carried out by the integrator of the system, but should ideally be carried out by a
specific integration tester or test team.

• Which are the different methodologies used in software testing?


Refer to software testing methodologies for detailed information on the different
methodologies used in software testing.

• Explain performance testing.


It is one of the non-functional types of software testing. The performance of a piece of
software is the degree to which a system or a component of the system accomplishes the
designated functions within given constraints regarding processing time and throughput
rate. Performance testing, therefore, is the process of testing to determine the
performance of the software.
• Explain the testcase life cycle.
On average, a test case goes through the following phases. The first phase of the test case
life cycle is identifying test scenarios, either from the specifications or from the use
cases designed to develop the system. Once the scenarios have been identified, test cases
appropriate for the scenarios have to be developed. The test cases are then reviewed, and
approval for them has to be taken from the concerned authority. After the test cases have
been approved, they are executed. When the execution of the test cases starts, the results
of the tests have to be recorded. Test cases that pass are marked accordingly. If test
cases fail, defects have to be raised. When the defects are fixed, the failed test cases
have to be executed again.

• Explain equivalence class partition.


It is a specification-based, or black box, technique. You can gather more information on
equivalence partitioning from the article on equivalence partitioning.

• Explain statement coverage.


It is a structure-based, or white box, technique. Test coverage measures, in a specific
way, the amount of testing performed by a set of tests. One of the test coverage types is
statement coverage: the percentage of executable statements that have been exercised by a
particular test suite. The formula used for statement coverage is:

Statement Coverage = (Number of statements exercised / Total number of statements) * 100%
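
In code, the calculation is a one-liner; in practice a coverage tool (for Python, for
example, coverage.py) counts the exercised statements for you.

def statement_coverage(statements_exercised: int, total_statements: int) -> float:
    # Statement coverage as a percentage, per the formula above.
    return statements_exercised / total_statements * 100

# e.g. a test suite that exercises 45 of 60 executable statements:
print(statement_coverage(45, 60))   # 75.0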

• What is acceptance testing?


Refer to the article on acceptance testing for the answer.

• Explain compatibility testing.


The answer to this question is in the article on compatibility testing.

• What is meant by functional defects and usability defects in general? Give an
appropriate example.
We will take the example of a 'Login window' to understand functionality and usability
defects. A functionality defect occurs when a user gives a valid user name but an invalid
password and clicks the login button, and the application accepts the user name and
password and displays the main window, where an error should have been displayed. A
usability defect, on the other hand, occurs when the user gives a valid user name but an
invalid password and clicks the login button, and the application throws up an error
message saying "Please enter valid user name" when the error message should have been
"Please enter valid password."
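
A hedged sketch of how the two checks could be automated is shown below; the login function
is hypothetical and stands in for the real application code.

def login(username: str, password: str) -> str:
    valid_users = {"alice": "s3cret"}
    if username not in valid_users:
        return "Please enter valid user name"
    if valid_users[username] != password:
        return "Please enter valid password"
    return "MAIN WINDOW"

def test_functional_invalid_password_is_rejected():
    # Functional defect if this fails: a wrong password must never log the user in.
    assert login("alice", "wrong") != "MAIN WINDOW"

def test_usability_error_message_names_the_right_field():
    # Usability defect if this fails: the message should blame the password,
    # not the user name, when only the password is wrong.
    assert login("alice", "wrong") == "Please enter valid password"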

• What are the check lists, which a software tester should follow?
Read the link on check lists for software tester to find the answer to the question.
• What is usability testing?
Refer to the article titled usability testing for an answer to this question.

• What is exploratory testing?


Read the page on exploratory testing to find the answer.

• What is security testing?


Read on security testing for an appropriate answer.

• Explain white box testing.


One of the testing types used in software testing is white box testing. Read in detail on
white box testing.

• What is the difference between volume testing and load testing?


Volume testing checks whether the system can actually cope with a large amount of data;
for example, a large number of fields in a particular record, or numerous records in a
file. Load testing, on the other hand, measures the behavior of a component or a system
under increased load. The increase in load can be in terms of the number of parallel users
and/or parallel transactions. This helps to determine the amount of load that can be
handled by the component or the software system.

• What is pilot testing?


It is a test of a component of a software system, or of the entire system, under real
operating conditions. The realistic environment helps to find defects in the system and
prevents costly bugs from being detected later on. Normally, a group of users uses the
system before its complete deployment and gives feedback about it.

• What is exact difference between debugging & testing?


When a test is run and a defect has been identified, it is the duty of the developer to
first locate the defect in the code and then fix it. This process is known as debugging. In
other words, debugging is the process of finding, analyzing and removing the causes of
failures in the software. Testing, on the other hand, consists of both static and dynamic
life cycle activities. It helps to determine that the software satisfies the specified
requirements and is fit for purpose.

• Explain black box testing.


Find the answer to the question in the article on black box testing.

• What is verification and validation?


Read on the two techniques used in software testing namely verification and validation in
the article on verification and validation.

• Explain validation testing.


For an answer about validation testing, click on the article titled validation testing.

• What is waterfall model in testing?


Refer to the article on waterfall model in testing for the answer.

• Explain beta testing.


For answer to this question, refer to the article on beta testing.

• What is boundary value analysis?


A boundary value is an input or an output value, which resides on the edge of an
equivalence partition. It can also be the smallest incremental distance on either side of an
edge, like the minimum or a maximum value of an edge. Boundary value analysis is a
black box testing technique, where the tests are based on the boundary values.
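
As a short sketch, assume a hypothetical rule that an age field accepts values from 18 to
65 inclusive; boundary value analysis concentrates the tests on the edges of that
partition, where off-by-one defects tend to hide.

def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # upper boundary
    66: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"boundary failure at age {age}"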

• What is system testing?


System testing is testing carried out on an integrated system to verify that the system
meets the specified requirements. It is concerned with the behavior of the whole system,
according to the scope defined. More often than not, system testing is the final test
carried out by the development team, in order to verify that the system developed meets
the specifications and to identify any defects that may be present.

• What is the difference between retest and regression testing?


Retesting, also known as confirmation testing, re-runs the test cases that failed the last
time they were run, in order to verify the success of corrective actions taken on the
defect found. Regression testing, on the other hand, is the testing of a previously tested
program after modifications, to make sure that no new defects have been introduced. In
other words, it helps to uncover defects in the unchanged areas of the software.

• What is a test suite?


A test suite is a set of several test cases designed for a component of a software or system
under test, where the post condition of one test case is normally used as the precondition
for the next test.
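
A minimal sketch of grouping related test cases into a suite with Python's unittest module
is shown below; the component under test (a plain list used as a stack) is deliberately
trivial.

import unittest

class StackPushTests(unittest.TestCase):
    def test_push_adds_item(self):
        stack = []
        stack.append(1)
        self.assertEqual(stack, [1])

class StackPopTests(unittest.TestCase):
    def test_pop_returns_last_item(self):
        stack = [1, 2]
        self.assertEqual(stack.pop(), 2)

def build_suite() -> unittest.TestSuite:
    suite = unittest.TestSuite()
    suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(StackPushTests))
    suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(StackPopTests))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner().run(build_suite())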

Web Browsers
A web browser is software that presents data on the Internet in a form a layman can use.
Mozilla Firefox, Internet Explorer, Google Chrome and Apple Safari are popular web
browsers.

Computer Operating Systems


Computers are run by operating systems (OS), which are computer programs organizing and
utilizing the computer's processing functions. An operating system lines up the jobs and
assigns them to their proper places. It tells the CPU to perform various functions and
checks the data moving through the various devices of the computer. There are many
operating systems for a computer user to choose from; the most popular include Microsoft
Windows, the UNIX system from Bell, Linux and Apple's Mac OS.

Software Validation Testing

While verification is a quality control process, validation testing is a quality assurance
process carried out before the software is ready for release. The goal of validation
testing is to validate the software product or system and be confident that it fulfills the
requirements given by the customer. The acceptance of the software by the end customer is
also a part of validation testing.

Validation testing answers the question, "Are you building the right software system?"
Another question which the entire process of validation testing in software engineering
answers is, "Is the deliverable fit for purpose?" In other words, does the software system
provide the right solution to the problem? For this reason, testing activities are often
introduced early in the software development life cycle. The two major points at which
validation testing should take place are the early stages of software development and
towards the end, when the product is ready for release; the latter is the acceptance
testing that forms part of validation testing.

Validation Testing Types

If the testers are involved in the software product right from the very beginning, then
validation testing in software testing starts right after a component of the system has been
developed. The different types of software validation testing are:

Component Testing
Component testing is also known as unit testing. The aim of the tests carried out in this
testing type is to search for defects in the software component. At the same time, it also
verifies the functioning of the different software components, like modules, objects,
classes, etc., which can be tested separately.

Integration Testing
This is an important part of the software validation model, where the interaction between
the different interfaces of the components is tested. Along with the interaction between
the different parts of the system, the interaction of the system with the computer
operating system, file system, hardware and any other software system it might interact
with is also tested.

System Testing
System testing, also known as functional and system testing, is carried out when the entire
software system is ready. The concern of this testing is to check the behavior of the whole
system as defined by the scope of the project. The main concern of system testing is to
verify the system against the specified requirements. While carrying it out, the tester is
not concerned with the internals of the system, but checks whether the system behaves as
per expectations.

Acceptance Testing
Here the tester especially has to literally think like the client and test the software with
respect to user needs, requirements and business processes and determine, whether the
software can be handed over to the client. At this stage, often a client representative is
also a part of the testing team, so that the client has confidence in the system. There are
different types of acceptance testing:

• Operational Acceptance Testing


• Compliance Acceptance Testing
• Alpha Testing
• Beta Testing

Often when validation testing interview questions are asked, they revolve around the
different types of validation testing. The difference between verification and validation is
also a common software validation testing question. Some organizations may use
different terms for some of the terms given in the article above. As far as possible, I have
tried to accept the alternate names as well.

A perfect software product is built when every step is taken with full consideration that
'the right product is developed in the right manner'. 'Software Verification & Validation'
is one such model; it helps system designers and test engineers confirm that the right
product is built the right way throughout the development process, and thereby improves
the quality of the software product.

The 'Verification & Validation Model' ensures that certain rules are followed during the
development of a software product, and also ensures that the product developed fulfills the
required specifications. This reduces the risk associated with a software project to a
certain level by helping in the detection and correction of errors and mistakes that are
unknowingly made during the development process.

What is Verification?
The standard definition of Verification is: "Are we building the product RIGHT?" That is,
verification is a process which ensures that the software product is developed the right
way. The software should conform to its predefined specifications; as product development
goes through its different stages, an analysis is done to ensure that all required
specifications are met.

Methods and techniques used in Verification and Validation should be designed carefully;
planning for them starts right at the beginning of the development process. The
Verification part of the 'Verification and Validation Model' comes before Validation and
incorporates software inspections, reviews, audits, walkthroughs, buddy checks etc. in each
phase of verification (every phase of Verification is a phase of the Testing Life Cycle).

During Verification, the work product (the ready part of the software being developed and
the various documents) is reviewed/examined personally by one or more persons in order to
find and point out the defects in it. This process helps prevent potential bugs that might
otherwise cause the project to fail.

A few terms involved in Verification:


Inspection:
Inspection involves a team of about 3-6 people, led by a leader, which formally reviews the
documents and work product during the various phases of the product development life cycle.
The work product and related documents are presented to the inspection team, whose members
bring different interpretations to the review. Bugs detected during the inspection are
communicated to the next level so that they can be taken care of.

Walkthroughs:
A walkthrough can be considered the same as an inspection, but without formal preparation
(of any presentation or documentation). During the walkthrough meeting, the presenter/author
introduces the material to all the participants in order to make them familiar with it.
Even though walkthroughs can help in finding potential bugs, they are mainly used for
knowledge sharing and communication.

Buddy Checks:
This is the simplest type of review activity used to find bugs in a work product during
verification. In a buddy check, one person goes through the documents prepared by another
person in order to find out whether that person has made any mistakes, i.e., to find bugs
which the author could not find previously.

The activities involved in the Verification process are: requirement specification
verification, functional design verification, internal/system design verification and code
verification (these phases can also be subdivided further). Each activity ensures that the
product is developed the right way and that every requirement, every specification, every
design, the code etc. is verified.

Usability Testing:
As the term suggests, usability means how well something can be used for the purpose it was
created for. Usability testing is a way to measure how easy or hard people (the
intended/end users) find it to interact with and use the system, keeping its purpose in
mind. It is a standard statement that "usability testing measures the usability of the
system".

Why Do We Need Usability Testing?


Usability testing is carried out in order to find out whether any change needs to be made
to the developed system (be it a design change or any specific procedural or programmatic
change) in order to make it more user friendly, so that the intended/end user who is
ultimately going to buy and use it receives a system which he can understand and use with
the utmost ease.

Any changes suggested by the tester at the time of usability testing are the most crucial
points that can change how the system stands in the intended/end user's view. The
developer/designer of the system needs to incorporate the feedback from usability testing
(which can be a very simple change in look and feel, or a complex change in the logic and
functionality of the system) into the design and code of the system (where the 'system' may
be a single object or an entire package consisting of more than one object) in order to
make the system more presentable to the intended/end user.

Developers often try to make the system as good looking as possible while fitting in the
required functionality; in this endeavor they may overlook some error-prone conditions
which are uncovered only when the end user uses the system in real time. Usability testing
helps the developer study the practical situations in which the system will be used in real
time. The developer also gets to know the areas that are error prone and the areas that
need improvement.

In simple words, usability testing is an in-house dummy release of the system before the
actual release to the end users, in which the developer can find and fix all possible
loopholes.

How Is a Usability Test Carried Out?


A usability test, as mentioned above, is an in-house dummy release before the actual
release of the system to the intended/end user. Hence, a setup is required in which
developers and testers try to replicate situations as realistically as possible, to project
the real-time usage of the system. The testers try to use the system in exactly the same
manner that any end user can/will. Note that in this type of testing too, all the standard
instructions of testing are followed, to make sure that testing is done in all directions
such as functional testing, system integration testing, unit testing etc.

The outcome/feedback is noted down based on observations of how the user uses the system,
what other ways of using it may come into the picture, the behavior of the system, and how
easy or hard it is for the user to operate the system. The user is also asked for feedback
on what he/she thinks should be changed to improve the interaction between the system and
the end user.

Usability testing measures various aspects, such as:

• How much time did the tester/user and the system take to complete the basic flow?
• How much time do people take to understand the system (per object), and how many
mistakes do they make while performing any process or flow of operation?
• How fast does the user become familiar with the system, and how fast can he/she recall
the system's functions?
• And the most important: how do people feel when they are using the system?

Over the time period, many people have formulated various measures and models for
performing usability testing. Any of the models can be used to perform the test.

Advantages of Usability Testing

• Usability test can be modified to cover many other types of testing such as
functional testing, system integration testing, unit testing, smoke testing etc. (with
keeping the main objective of usability testing in mind) in order to make it sure
that testing is done in all the possible directions.
• Usability testing can be very economical if planned properly, yet highly effective
and beneficial.
• If proper resources (experienced and creative testers) are used, usability test can
help in fixing all the problems that user may face even before the system is finally
released to the user. This may result in better performance and a standard system.
• Usability testing can help in uncovering potential bugs and pitfalls in the system
which generally are not visible to developers, and which even escape the other types of
testing.

Usability testing is a very wide area of testing and it needs a fairly high level of
understanding of the field, along with a creative mind. People involved in usability
testing are required to possess skills like patience, the ability to listen to suggestions,
openness to welcoming any idea, and most important of all, good observation skills to spot
and fix problems on the fly.
