


Faculty Name: Mrs. Garima Student Name: Nekkunj Pilani

Roll No.: 40114802716

Semester: VII
Group: 7-C-7


PSP Area, Plot No. 1, Sector - 22, Rohini, Delhi-110086

Q1: What is software testing and explain its principles. (Unit 1)
Explain different types of black box testing techniques in detail. (Unit 2)
How to plan quality of a system. (Unit 3)
What is automated testing and explain different generations of testing tools. (Unit 4)

Ans. Testing is the process of exercising or evaluating a system or system component by manual
or automated means to verify that it satisfies specified requirements. In other terms, software
testing is the process of executing a program or system with the intent of finding errors. It involves
any activity aimed at evaluating an attribute or capability of a program and determining that it
meets its required results.

Principles of Software Testing

1. Testing should be based on user requirements. This is in order to uncover any defects that
might cause the program or system to fail to meet the client's requirements.
2. Testing time and resources are limited. Avoid redundant tests.
3. Exhaustive testing is impossible. It is impossible to test everything due to the huge
data space and the large number of paths a program's flow might take.
4. Use effective resources to test. This means using the most suitable tools, procedures,
and individuals to conduct the tests.
5. Test planning should be done early. Test planning can begin independently
of coding, as soon as the client requirements are set.
6. Testing should begin in the small and progress toward testing in the large. The smallest
programming units should be tested first, and testing then expanded to other parts of the system.
7. Testing should be conducted by an independent third party.
8. All tests should be traceable to customer requirements.
9. Assign the best people for testing; avoid having programmers test their own code.
10. Tests should be planned to show software defects, not their absence.

Different types of black box testing techniques are:

1. BVA (Boundary value analysis): It is a black box testing technique built on the
observation that the density of defects is higher towards the boundaries. The basic
idea of BVA is to use input variable values at their minimum, just above the minimum, at a
nominal value, just below the maximum, and at the maximum. BVA is based upon a
critical assumption known as the single fault assumption theory. According to this
assumption, we derive test cases on the basis of the fact that failures are rarely due to the
simultaneous occurrence of two faults. So, we derive test cases by holding the values of all but
one variable at nominal and letting that one variable assume its extreme values. For a function
of n variables, BVA yields (4n + 1) test cases.
2. Robustness Testing: Another variant of BVA testing is robustness testing. In BVA we stay
within the legitimate boundary of our range. That is, we consider the following values for
testing: (min, min+, nom, max-, max), whereas in robustness testing we also try to cross these
legitimate boundaries. So now we consider these values for testing:
(min-, min, min+, nom, max-, max, max+). With robustness testing we can focus on
exception handling. With strongly typed languages, robustness testing may be very
awkward. For a program with n variables, robustness testing yields (6n + 1) test cases.

3. Worst Case Testing: If we reject our basic single fault assumption theory
and focus on what happens when more than one variable has an extreme value, we are
following the multiple fault assumption theory. In electronic circuit analysis, this is called
"worst case analysis". For each variable, start with the five-element set that contains the
min, min+, nom, max-, and max values. Then take the Cartesian product of these sets to
generate test cases. For a program of n variables, 5^n test cases are generated. Robust
worst case testing yields 7^n test cases.

4. Equivalence class testing: In this technique, the input and output domains are divided into
a finite number of equivalence classes; then we select one representative of each class and
test our program against it. The tester assumes that if one representative from a class is
able to detect an error, any other member of the class would detect the same error.
Conversely, if the representative test case did not detect any error, we assume that no other
test case of that class can detect an error. In this method we consider both valid and invalid
input domains. The system is still treated as a black box.
The following guidelines are helpful for equivalence class testing:
1) The weak forms of equivalence class testing (normal or robust) are not as comprehensive
as the corresponding strong forms.
2) If the implementation language is strongly typed and invalid values cause run-time errors,
then there is no point in using the robust form.
3) If error conditions are a high priority, the robust forms are appropriate.
4) Equivalence class testing is appropriate when input data is defined in terms of intervals
and sets of discrete values. This is certainly the case when system malfunctions can occur
for out-of-limit variable values.
5) Equivalence class testing is strengthened by a hybrid approach with boundary value
analysis (BVA).
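As a small illustration of the (4n + 1) rule from BVA, the following Python sketch generates boundary value test cases for a function of n variables (the function name and the day/month ranges are hypothetical, chosen only for the example):

```python
def bva_cases(ranges):
    """Generate the (4n + 1) BVA test cases for n input variables.

    ranges: list of (min, max) tuples, one per variable.
    Per the single fault assumption, each case holds every variable at
    its nominal value except one, which takes min, min+, max-, or max.
    Adding min-1 and max+1 per variable would give the (6n + 1)
    robustness set instead.
    """
    nom = [(lo + hi) // 2 for lo, hi in ranges]  # nominal values
    cases = [tuple(nom)]                         # the all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):
            case = list(nom)
            case[i] = v
            cases.append(tuple(case))
    return cases

# Two variables, e.g. day in [1, 31] and month in [1, 12]
cases = bva_cases([(1, 31), (1, 12)])
print(len(cases))  # 4*2 + 1 = 9 test cases
```

For n = 2 the generator produces exactly 9 cases, matching the (4n + 1) formula above.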


A quality plan is a document, or several documents, that together specify quality standards,
practices, resources, specifications, and the sequence of activities relevant to a particular product,
service, project, or contract. Quality plans should define:

 Objectives to be attained (for example, characteristics or specifications, uniformity, effectiveness,
aesthetics, cycle time, cost, natural resources, utilization, yield, dependability, and so on)
 Steps in the processes that constitute the operating practice or procedures of the organization
 Allocation of responsibilities, authority, and resources during the different phases of the process
or project
 Specific documented standards, practices, procedures, and instructions to be applied
 Suitable testing, inspection, examination, and audit programs at appropriate stages
 A documented procedure for changes and modifications to a quality plan as a process is improved
 A method for measuring the achievement of the quality objectives
 Other actions necessary to meet the objectives

At the highest level, quality goals and plans should be integrated with overall strategic plans of the
organization. As organizational objectives and plans are deployed throughout the organization,
each function fashions its own best way for contributing to the top-level goals and objectives.
The steps of Quality Planning are:

S1: Definition of processes, procedures, standards and guidelines for developing and testing
software. Reviewing of these standards is also to be done as it improves software process maturity.

S2: Performing quality operations like verification and validation as per process definition during
entire SDLC.

S3: Regular audits are to be done.

S4: Collecting metrics and defining action plans for the weaker software processes is also to be
done.
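To illustrate step S4, here is a minimal sketch of collecting a common quality metric (defect density per KLOC) and flagging the weaker modules for an action plan. The module names, counts, and threshold are all invented for the example:

```python
# Hypothetical module data: (module name, defects found, lines of code)
modules = [
    ("login", 4, 1200),
    ("billing", 12, 3000),
    ("reports", 2, 2500),
]

# Defect density per KLOC; modules above the threshold need an action plan
threshold = 3.0
flagged = []
for name, defects, loc in modules:
    density = defects / (loc / 1000)
    if density > threshold:
        flagged.append(name)
    print(f"{name}: {density:.2f} defects/KLOC")

print(flagged)  # ['login', 'billing']
```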


1st Generation: Record and Playback

In this generation, creating a test requires almost no coding at all. Some people call this type of
automation a Linear Scripting Framework.

Testers don’t need to write code to create functions and the steps are written in a sequential order.
In this process, the tester records each step such as navigation, user input, or checkpoints, and then
plays the script back automatically to conduct the test.

Pros and Cons

Recording a test is fast and requires no programming skills. However, maintaining a large number
of these tests is almost impossible. It is a big challenge to automate more complex scenarios. It is
a headache to integrate these tests into a CI process, get proper reporting, and configure them for
different working environments.

2nd Generation: Modular and Data-Driven Frameworks

In record and playback scripts, the data was hard-coded and more complex scenarios were almost
impossible to write. This is why most vendors and open-source tools started to support an export
to code option, where your recorded test is exported to a popular programming language where
you can edit and modify it.

Pros and Cons

Since these frameworks are coded, maintenance is a little bit easier, since you can use the
power of programming IDEs to fix the tests. Also, they significantly increase the reuse of logic.

3rd Generation: Library and Keyword-Driven Frameworks

The Library Architecture Testing Framework is fundamentally and foundationally built on
Module-Based Testing Frameworks, with some additional advantages. Instead of dividing the
application under test into test scripts, we segregate the application into functions, or rather,
common functions. The basic idea behind the framework is to determine the common
steps, group them into functions under a library, and call those functions in the test scripts
whenever required.

Pros and Cons

With library-based frameworks, much more test code can be reused, and fixes start to happen
in a single place.

However, with time if there are slight modifications to the workflow in some of the apps, it gets
harder and harder to keep the abstraction good. Writing more abstract code requires engineers to
have more in-depth programming knowledge. Because of the higher abstraction, the tests get even
harder to read.
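The library/keyword-driven idea above can be sketched in a few lines of Python. Here a tiny library of common functions is driven by a keyword script; the functions (login, add_to_cart, checkout) and the data are invented purely for illustration:

```python
# Library of common functions (the "library architecture")
def login(session, user, password):
    session["user"] = user          # stand-in for real UI/API steps
    return session

def add_to_cart(session, item):
    session.setdefault("cart", []).append(item)
    return session

def checkout(session):
    return {"user": session["user"], "items": session["cart"]}

# Keyword-driven test script: each step names a library function plus arguments
script = [
    ("login", ("alice", "secret")),
    ("add_to_cart", ("book",)),
    ("checkout", ()),
]

keywords = {"login": login, "add_to_cart": add_to_cart, "checkout": checkout}

session, result = {}, None
for keyword, args in script:
    out = keywords[keyword](session, *args)
    if keyword == "checkout":
        result = out

print(result)  # {'user': 'alice', 'items': ['book']}
```

The test script itself contains no implementation detail, only keywords; a fix to a common step (say, login) happens once, in the library.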

Q2: Explain quality assurance (Unit 1) and how to improve the quality of the software
using six sigma. (Unit 3)
Explain different levels of testing. (Unit 2)
Define the regression testing process with its types.(Unit 4)

Quality Assurance, popularly known as QA Testing, is defined as an activity to ensure that an organization
is providing the best possible product or service to customers. QA focuses on improving the processes used
to deliver quality products to the customer.
Quality assurance has a defined cycle called PDCA cycle or Deming cycle. The phases of this cycle are:

 Plan - Organization should plan and establish the process related objectives and determine the
processes that are required to deliver a high-Quality end product.
 Do - Development and testing of Processes and also "do" changes in the processes
 Check - Monitoring of processes, modify the processes, and check whether it meets the
predetermined objectives
 Act - Implement actions that are necessary to achieve improvements in the processes


Six Sigma is a quality management methodology used to help businesses improve current processes,
products or services by discovering and eliminating defects. The goal is to streamline quality control in
manufacturing or business processes so there is little to no variance throughout.

Six Sigma was trademarked by Motorola in 1993, but it references the Greek letter sigma, which is a
statistical symbol that represents a standard deviation. Motorola used the term because a Six Sigma process
is expected to be defect-free 99.99966 percent of the time — allowing for 3.4 defective features for every
million opportunities. Motorola initially set this goal for its own manufacturing operations, but it quickly
became a buzzword and widely adopted standard.

Six Sigma principles

The goal in any Six Sigma project is to identify and eliminate any defects that are causing variations in
quality by defining a sequence of steps around a certain target. The most common examples you’ll find use
the targets “smaller is better, larger is better or nominal is best.”

 Smaller is Better creates an upper specification limit, such as having a target of zero for defects or rejects.
 Larger is Better involves a “lower specification limit,” such as test scores, where the target is 100 percent.
 Nominal is Best looks at the middle ground: a customer service rep needs to spend enough time on the
phone to troubleshoot a problem, but not so long that they lose productivity.
The process aims to bring data and statistics into the mix to help objectively identify errors and defects
that will impact quality. It’s designed to fit a variety of business goals, allowing organizations to define
objectives around specific industry needs.
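The 3.4-defects-per-million figure quoted above is usually expressed as DPMO (defects per million opportunities). A quick worked calculation, with the defect and unit counts invented for the example:

```python
# DPMO: defects per million opportunities
defects = 17
units = 5000
opportunities_per_unit = 4   # distinct ways each unit can be defective

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
print(dpmo)  # 850.0

# The Six Sigma target is 3.4 DPMO, i.e. 99.99966 percent defect-free
yield_pct = (1 - 3.4 / 1_000_000) * 100
print(round(yield_pct, 5))  # 99.99966
```

A process at 850 DPMO is far better than most, yet still well short of the Six Sigma target of 3.4.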


There are many different testing levels which help to check behavior and performance in software testing.
These testing levels are designed to recognize missing areas and reconcile the states of the development
lifecycle. SDLC models have characterized phases such as requirement gathering, analysis,
design, coding or execution, testing, and deployment.

All these phases go through the process of software testing levels. There are mainly four testing levels:

Each of these testing levels has a specific purpose and provides value to the software
development lifecycle.

1) Unit testing:

A unit is the smallest testable portion of a system or application which can be compiled, linked, loaded, and
executed. This kind of testing helps to test each module separately.

The aim is to test each part of the software in isolation. It checks whether each component fulfils its
functionality. This kind of testing is performed by developers.
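A minimal unit test, written here with Python's standard unittest module. The function under test (percentage) is a made-up example; the point is that one small unit is compiled, loaded, and exercised in isolation:

```python
import unittest

# The unit under test: a single function, tested in isolation
def percentage(part, whole):
    if whole == 0:
        raise ValueError("whole must be non-zero")
    return part / whole * 100

class TestPercentage(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(percentage(25, 200), 12.5)

    def test_zero_whole_raises(self):
        with self.assertRaises(ValueError):
            percentage(1, 0)

if __name__ == "__main__":
    # exit=False keeps the interpreter alive after the test run
    unittest.main(exit=False)
```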

2) Integration testing:

Integration means combining. For Example, In this testing phase, different software modules are combined
and tested as a group to make sure that integrated system is ready for system testing.

Integration testing checks the data flow from one module to other modules. This kind of testing is performed
by testers.

3) System testing:

System testing is performed on a complete, integrated system. It allows checking the system's compliance
with the requirements. It tests the overall interaction of components. It involves load, performance, reliability,
and security testing.
System testing is most often the final test to verify that the system meets the specification. It evaluates both
functional and non-functional needs of the system.

4) Acceptance testing:

Acceptance testing is a test conducted to determine whether the requirements of a specification or contract
are met upon delivery. Acceptance testing is basically done by the user or customer. However, other
stakeholders can be involved in this process.

Process of Regression testing:

Firstly, whenever we make some changes to the source code for reasons like adding new functionality,
optimization, etc., the program may fail on the previously designed test suite when executed. After the
failure, the source code is debugged in order to identify the bugs in the program. After identification of
the bugs in the source code, appropriate modifications are made. Then appropriate test cases are selected
from the already existing test suite, covering all the modified and affected parts of the source code. We
can add new test cases if required. In the end, regression testing is performed using the selected test cases.

Techniques for the selection of Test cases for Regression Testing:

 Select all test cases: In this technique, all the test cases are selected from the already existing test
suite. It is the most simple and safest technique but not much efficient.
 Select test cases randomly: In this technique, test cases are selected randomly from the existing test-
suite but it is only useful if all the test cases are equally good in their fault detection capability which
is very rare. Hence, it is not used in most of the cases.
 Select modification traversing test cases: In this technique, only those test cases are selected which
cover and test the modified portions of the source code and the parts which are affected by these
modifications.
 Select higher priority test cases: In this technique, priority codes are assigned to each test case of
the test suite based upon their bug detection capability, customer requirements, etc. After assigning
the priority codes, test cases with highest priorities are selected for the process of regression testing.
Test case with highest priority has highest rank. For example, test case with priority code 2 is less
important than test case with priority code 1.
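The priority-based selection described above can be sketched in a few lines. The test case names and priority codes are hypothetical; as in the text, a lower priority code means a more important test case:

```python
# Hypothetical regression suite: (test case id, priority code)
# Lower priority code = higher rank (code 1 outranks code 2)
suite = [
    ("TC_login", 1),
    ("TC_report_export", 3),
    ("TC_payment", 1),
    ("TC_profile_edit", 2),
]

# Select every test case at or above the chosen priority cutoff
cutoff = 2
selected = [tc for tc, prio in suite if prio <= cutoff]
print(selected)  # ['TC_login', 'TC_payment', 'TC_profile_edit']
```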
Types of Regression Testing

1) Corrective Regression Testing:

This type of testing is used when there are no changes introduced in the product’s specification. Moreover,
the already existing test cases can be easily reused to conduct the desired test.

2) Retest-all Regression Testing:

This type of testing is very tedious and tends to waste a lot of time.

The strategy involves the testing of all aspects of a particular product as well as reusing all test cases even
where the changes/modifications have not been made.

This type of testing is not at all advisable when there is a small change, that has been introduced in the
existing product.

3) Selective Regression Testing:

It is done to analyze the impact of new code added to the already existing code of the software.

When this type of regression testing is conducted, a subset from the existing test cases is used, to reduce
the effort required for retesting and the cost involved.

For example, a test unit is re-run in case there is some change incorporated in the program entities such as
functions and variables.

4) Progressive Regression Testing:

This type of regression testing works effectively when there are certain changes in the program
specifications and new test cases are designed. Conducting this testing helps ensure that no features
that existed in the previous version have been compromised in the new and updated version.