
WEB TESTING

While testing a web application you need to consider the following cases: 

• Functionality Testing
• Performance Testing
• Usability Testing
• Server Side Interface
• Client Side Compatibility
• Security
Functionality:
When testing the functionality of web sites, the following should be tested:
• Links
i. Internal Links
ii. External Links
iii. Mail Links
iv. Broken Links
• Forms
i. Field validation
ii. Error message for wrong input
iii. Optional and Mandatory fields
• Database
* Testing will be done on the database integrity.
• Cookies
* Testing will be done on the client system side, on the temporary Internet files.
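As a minimal sketch of automated link checking (the helper and URLs are illustrative, not a specific tool's API), internal, external, and mail links can be separated before each one is requested and checked for a broken response:

```python
# Minimal link-classification sketch; URLs below are placeholders.
from urllib.parse import urljoin, urlparse

def classify_links(base_url, hrefs):
    """Split page links into internal, external, and mail links."""
    base_host = urlparse(base_url).netloc
    result = {"internal": [], "external": [], "mail": []}
    for href in hrefs:
        if href.startswith("mailto:"):
            result["mail"].append(href)
            continue
        absolute = urljoin(base_url, href)  # resolve relative links
        kind = "internal" if urlparse(absolute).netloc == base_host else "external"
        result[kind].append(absolute)
    return result
```

Each resolved URL would then be fetched, and any 4xx/5xx response reported as a broken link.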
Performance:
Performance testing can be applied to understand the web site’s scalability, or to benchmark
the performance in the environment of third party products such as servers and middleware
for potential purchase.
• Connection Speed:
Tested over various network connections such as dial-up, ISDN, etc.
• Load:
i. What is the number of users per unit of time?
ii. Check for peak loads and how the system behaves.
iii. Large amounts of data accessed by the user.
• Stress:
i. Continuous Load
ii. Performance of memory, CPU, file handling, etc.
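A minimal load-test sketch of the ideas above, using a thread pool to simulate concurrent users; `fetch()` here is a stand-in for a real HTTP request, and the numbers are illustrative:

```python
# Tiny load-test sketch: fire N concurrent requests and time them.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    time.sleep(0.01)  # stand-in for real network latency
    return 200        # stand-in for an HTTP 200 response

def load_test(url, users=20):
    start = time.time()
    with ThreadPoolExecutor(max_workers=users) as pool:
        statuses = list(pool.map(fetch, [url] * users))
    elapsed = time.time() - start
    failures = sum(1 for s in statuses if s != 200)
    return {"requests": users, "failures": failures, "elapsed_sec": round(elapsed, 2)}
```

Stress behaviour is observed by raising `users` until response times or failure counts degrade.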
Usability:
Usability testing is the process by which the human-computer interaction characteristics of a
system are measured, and weaknesses are identified for correction.
• Ease of learning
• Navigation
• Subjective user satisfaction
• General appearance
Server Side Interface:
In web testing, the server-side interface should be tested. This is done by verifying that
communication happens properly. Compatibility of the server with software, hardware, network,
and database should be tested.
Client Side Compatibility:
Client-side compatibility is also tested on various platforms, using various browsers, etc.
Security: 
The primary reason for testing the security of a web application is to identify potential
vulnerabilities and subsequently repair them.
• Network Scanning
• Vulnerability Scanning
• Password Cracking
• Log Review
• Integrity Checkers
• Virus Detection

Way to become a good tester:

Remember these ten rules and I am sure you will gain very good testing
skills.

1. Test early and test often.


2. Integrate the application development and testing life cycles. You’ll get better
results and you won’t have to mediate between two armed camps in your IT shop.
3. Formalize a testing methodology; you’ll test everything the same way and you’ll
get uniform results.
4. Develop a comprehensive test plan; it forms the basis for the testing
methodology.
5. Use both static and dynamic testing.
6. Define your expected results.
7. Understand the business reason behind the application. You’ll write a better
application and better testing scripts.
8. Use multiple levels and types of testing (regression, systems, integration, stress
and load).
9. Review and inspect the work; it will lower costs.
10. Don’t let your programmers check their own work; they’ll miss their own errors.
For the success of any project, test estimation and proper execution are as important as the
development cycle itself. Sticking to the estimate is very important for building a good
reputation with the client.
Experience plays a major role in estimating software testing efforts. Working on varied
projects helps to prepare accurate estimates for the testing cycle. Obviously one cannot
just blindly put some number of days against any testing task. Test estimates should be
realistic and accurate.
In this article I am trying to put some points in a very simple manner that are helpful for
preparing good test estimates. I am not going to discuss standard methods for test
estimation like testing metrics; instead I am offering some tips on how to estimate
testing effort for any testing task, which I learned from my experience.
Factors Affecting Software Test Estimation, and General Tips to Estimate
Accurately:
1) Think of Some Buffer Time
The estimate should include some buffer, but do not add a buffer that is not realistic.
Having a buffer in the estimate enables the team to cope with any delays that may occur. It
also helps to ensure maximum test coverage.
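As a back-of-the-envelope illustration (the 15% figure is an assumption, not a rule), a buffer can be folded into the estimate like this:

```python
# Illustrative padded estimate; the default buffer percentage is assumed.
def padded_estimate(base_days, buffer_pct=15):
    """Add a percentage buffer to a base effort estimate in days."""
    return round(base_days * (1 + buffer_pct / 100), 1)
```

So a 20-day base estimate with a 15% buffer becomes 23 days.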
2) Consider the Bug Cycle
The test estimate should also include the bug cycle. The actual test cycle may take more days
than estimated. To avoid this, consider the fact that the test cycle depends on the
stability of the build. If the build is not stable, developers may need more time for fixes,
and the testing cycle gets extended accordingly.
3) Availability of All the Resources for Estimated Period
The test estimate should account for all leaves planned by team members (typically
long leaves) in the next few weeks or months. This will ensure that the estimates
are realistic. The estimate should assume a fixed number of resources for the test cycle.
If the number of resources is reduced, the estimate should be revisited and updated
accordingly.
4) Can We Do Parallel Testing?
Do you have previous versions of the same product against which you can compare the output?
If yes, this can make your testing task a bit easier. Base your estimate on your product
version.
5) Estimations Can Go Wrong – So re-visit the estimations frequently in initial
stages before you commit it. 
In the early stages, we should frequently revisit the test estimates and make modifications if
needed. We should not extend the estimate once we freeze it, unless there are major
changes in requirements.
6) Think of Your Past Experience to Make Judgments!
Experience from past projects plays a vital role while preparing time estimates. We can
try to avoid the difficulties and issues that were faced in past projects, and analyze
how accurate the previous estimates were and how much they helped to deliver the product on time.
7) Consider the Scope of Project
Know the end objective of the project and the list of all final deliverables. Factors to be
considered for small and large projects differ a lot. Large projects typically include setting up
a test bed, generating test data, writing test scripts, etc., so the estimate should be based on all
these factors. In small projects, the test cycle typically includes test case writing,
execution, and regression.
8 ) Are You Going to Perform Load Testing?
If you need to put considerable time into performance testing, then estimate accordingly.
Estimates for projects that involve load testing should be treated differently.
9) Do You Know Your Team?
If you know the strengths and weaknesses of the individuals working in your team, you can
estimate testing tasks more precisely. While estimating, consider the fact that not all
resources yield the same productivity level. Some people can execute tests faster than
others. Though this is not a major factor, it adds to the total delay in deliverables.
How to test software requirements specification (SRS)?

Do you know “Most of the bugs in software are due to incomplete or inaccurate
functional requirements”? The software code, no matter how well it’s written, can’t
do anything if there are ambiguities in the requirements.
It’s better to catch requirement ambiguities and fix them early in the development life cycle.
The cost of fixing a bug after completion of development or after product release is too high. So
it’s important to do requirement analysis and catch these incorrect requirements before the
design specification and project implementation phases of the SDLC.

How to measure functional software requirement specification (SRS) documents?


Well, we need to define some standard tests to measure the requirements. Once each
requirement is passed through these tests you can evaluate and freeze the functional
requirements.
Let’s take an example. You are working on a web-based application. The requirement is as
follows:
“Web application should be able to serve the user queries as early as possible”
How will you freeze the requirement in this case?
What will be your requirement satisfaction criteria? To get the answer, ask this question to
stakeholders: How much response time is ok for you?
If they say, “we will accept the response if it’s within 2 seconds”, then this is your
requirement measure. Freeze this requirement and carry out the same procedure for the next
requirement.
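Once frozen, such a measurable requirement translates directly into a pass/fail check. A minimal sketch, where the 2-second threshold is the value agreed with the stakeholders above:

```python
# The threshold comes from the stakeholder answer: "within 2 seconds".
MAX_RESPONSE_SEC = 2.0

def requirement_met(response_time_sec):
    """Pass/fail oracle for 'serve user queries within 2 seconds'."""
    return response_time_sec <= MAX_RESPONSE_SEC
```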
We just learned how to measure requirements and freeze them for the design,
implementation, and testing phases.
Now let’s take another example. I was working on a web-based project. The client
(stakeholders) specified the project requirements for the initial phase of project
development. My manager circulated all the requirements in the team for review. When we
started discussing these requirements, we were shocked! Everyone had his or her own
interpretation of the requirements. We found a lot of ambiguities in the ‘terms’ specified in
the requirement documents, which were later sent to the client for review/clarification.

The client had used many ambiguous terms with several different meanings, making it
difficult to analyze the exact meaning. The next version of the requirement doc from the client
was clear enough to freeze for the design phase.

From this example we learned: “Requirements should be clear and consistent.”


The next criterion for testing the requirements specification is “discover missing
requirements”.
Many times project designers don’t get a clear idea about specific modules and simply
assume some requirements during the design phase. No requirement should be based on
assumptions. Requirements should be complete, covering each and every aspect of the
system under development.

Specifications should state both types of requirements, i.e. what the system should do and what it
should not.

Generally I use my own method to uncover unspecified requirements. When I read
the software requirements specification document (SRS), I note down my own
understanding of the requirements that are specified, plus other requirements the SRS
document is supposed to cover. This helps me ask questions about the unspecified
requirements, making them clearer.
To check requirements for completeness, divide the requirements into three sections: ‘must
implement’ requirements, requirements that are not specified but are ‘assumed’, and
‘imagination’-type requirements. Check that all three types of requirements are addressed
before the software design phase.

Check if the requirements are related to the project goal.


Sometimes stakeholders have their own expertise, which they expect to appear in the system
under development. They don’t consider whether that requirement is relevant to the project
at hand. Make sure to identify such requirements, and try to avoid irrelevant requirements in
the first phase of the project development cycle. If that is not possible, ask the stakeholders:
why do you want to implement this specific requirement? The answer will describe the particular
requirement in detail, making it easier to design the system with the future scope in
mind.
But how do you decide whether requirements are relevant or not?
Simple answer: set the project goal and ask this question: will not implementing this
requirement cause any problem in achieving our specified goal? If not, then this is an
irrelevant requirement. Ask the stakeholders whether they really want to implement these types
of requirements.
In short, a requirements specification (SRS) doc should address the following:
• Project functionality (what should be done and what should not)
• Software and hardware interfaces and user interface
• System correctness, security, and performance criteria
• Implementation issues (risks), if any
Conclusion: 
I have covered all aspects of requirement measurement. To be specific about requirements,
I will summarize requirement testing in one sentence:
“Requirements should be clear and specific with no uncertainty, requirements
should be measurable in terms of specific values, requirements should be testable
having some evaluation criteria for each requirement, and requirements should be
complete, without any contradictions”
Testing should start at the requirement phase to avoid further requirement-related bugs.
Communicate as much as possible with your stakeholders to clarify all requirements before
starting project design and implementation.

What you need to know about BVT (Build Verification Testing)

What is BVT?

A build verification test is a set of tests run on every new build to verify that the build is
testable before it is released to the test team for further testing. These test cases cover core
functionality and ensure the application is stable and can be tested thoroughly. Typically the
BVT process is automated. If BVT fails, the build is assigned back to the developer for a fix.

BVT is also called smoke testing or build acceptance testing (BAT).

A new build is checked mainly for two things:
• Build validation
• Build acceptance
Some BVT basics:
• It is a subset of tests that verify the main functionalities.
• BVTs are typically run on daily builds, and if the BVT fails, the build is rejected
and a new build is released after the fixes are done.
• The advantage of BVT is that it saves the test team the effort of setting up and testing a build
when major functionality is broken.
• Design BVTs carefully enough to cover basic functionality.
• Typically a BVT should not run for more than 30 minutes.
• BVT is a type of regression testing, done on each and every new build.
BVT primarily checks project integrity and whether all the modules are integrated properly.
Module integration testing is very important when different teams develop the project
modules. I have heard of many cases of application failure due to improper module
integration. In the worst cases, the complete project gets scrapped due to failure in
module integration.

What is the main task in a build release? Obviously file ‘check-in’, i.e. including all the
new and modified project files associated with the respective build. BVT was primarily
introduced to check initial build health, i.e. to check whether all the new and modified files
are included in the release, all file formats are correct, and every file’s version, language, and
flags are right.
These basic checks are worth performing before releasing the build to the test team. You will
save time and money by discovering build flaws at the very beginning using BVT.
Which test cases should be included in BVT?
This is a very tricky decision to make before automating the BVT task. Keep in mind that the
success of BVT depends on which test cases you include in it.

Here are some simple tips for including test cases in your BVT automation suite:
• Include only critical test cases in BVT.
• All test cases included in BVT should be stable.
• All the test cases should have a known expected result.
• Make sure the included critical functionality test cases are sufficient for application
test coverage.
Also, do not include modules in BVT that are not yet stable. For under-development
features you can’t predict the expected behavior, as these modules are unstable and you might
already know of some failures in these incomplete modules before testing. There is no point
in using such modules or test cases in BVT.

You can make this critical-functionality test case selection simple by communicating
with everyone involved in the project development and testing life cycle. Such a process helps
negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality
standards; these standards can be met only by analyzing major project features and
scenarios.

Example: Test cases to be included in BVT for Text editor application (Some sample
tests only):
1) Test case for creating a text file.
2) Test case for writing into the text editor.
3) Test case for copy, cut, and paste functionality of the text editor.
4) Test case for opening, saving, and deleting a text file.

These are some sample test cases which can be marked as ‘critical’; for every minor or
major change in the application, these basic critical test cases should be executed. This task
can be easily accomplished by BVT.
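The sample cases above can be sketched as automated checks. The file round-trip below uses a real temporary file; the clipboard is a plain dict standing in for the editor's copy/paste mechanism (both are illustrative, not a real editor API):

```python
# Hedged BVT sketch for the text-editor sample cases above.
import os
import tempfile

def bvt_text_file_roundtrip():
    """Create, write, save, reopen, and delete a text file."""
    path = os.path.join(tempfile.mkdtemp(), "bvt.txt")
    with open(path, "w") as f:      # create + write
        f.write("hello")
    with open(path) as f:           # open + verify saved content
        assert f.read() == "hello"
    os.remove(path)                 # delete
    return not os.path.exists(path)

def bvt_copy_paste(clipboard, text):
    """Simulate copy/paste via a dict stand-in for the clipboard."""
    clipboard["data"] = text        # copy
    return clipboard["data"]        # paste
```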
The BVT automation suite needs to be maintained and modified from time to time, e.g. include
new test cases in BVT as new stable project modules become available.

What happens when the BVT suite runs?

Say the build verification automation test suite is executed after any new build.
1) The result of the BVT execution is sent to all the email IDs associated with that project.
2) The BVT owner (the person executing and maintaining the BVT suite) inspects the result.
3) If the BVT fails, the BVT owner diagnoses the cause of the failure.
4) If the cause of failure is a defect in the build, all the relevant information with failure logs
is sent to the respective developers.
5) The developer, from his initial diagnosis, replies to the team about the cause of the failure:
whether it is really a bug, and if so, what his bug-fixing scenario will be.
6) Once the bug is fixed, the BVT test suite is executed again, and if the build passes BVT, it
is handed to the test team for further detailed functionality, performance, and other tests.

This process gets repeated for every new build.

Why does BVT or the build fail?

BVT breaks sometimes. This doesn’t mean there is always a bug in the build. There are
other reasons for a build to fail, such as a test case coding error, an automation suite error,
an infrastructure error, hardware failures, etc.
You need to troubleshoot the cause of the BVT break and take proper action after
diagnosis.

Tips for BVT success:


1) Spend considerable time writing BVT test case scripts.
2) Log as much detailed info as possible to diagnose the BVT pass or fail result. This will
help the developer team debug and quickly find the cause of failure.
3) Select stable test cases to include in BVT. For new features, if a new critical test case
passes consistently on different configurations, promote it into your BVT suite.
This will reduce the probability of frequent build failures due to new unstable modules and
test cases.
4) Automate the BVT process as much as possible. From the build release process to BVT
result communication, automate everything.
5) Have some penalties for breaking the build. Some chocolates or a team coffee party
from the developer who broke the build will do.
Conclusion:
BVT is nothing but a set of regression test cases executed each time for a new build.
This is also called a smoke test. The build is not assigned to the test team unless and until the
BVT passes. BVT can be run by a developer or tester, and the BVT result is communicated
throughout the team; immediate action is taken to fix the bug if BVT fails. The BVT process
is typically automated by writing scripts for the test cases. Only critical test cases are included
in BVT, and these test cases should ensure application test coverage. BVT is very effective for
daily as well as long-term builds. It saves significant time, cost, and resources, and, above all,
spares the test team the frustration of an incomplete build.
Bug life cycle
What is Bug/Defect?
Simple Wikipedia definition of Bug is: “A computer bug is an error, flaw, mistake,
failure, or fault in a computer program that prevents it from working correctly or produces
an incorrect result. Bugs arise from mistakes and errors, made by people, in either a
program’s source code or its design.”
Other definitions can be:
An unwanted and unintended property of a program or piece of hardware, especially one
that causes it to malfunction.
or
A fault in a program, which causes the program to perform in an unintended or
unanticipated manner.
Lastly, the general definition of a bug is: “failure to conform to specifications”.

If you want to detect and resolve defects at an early development stage, defect tracking and
the software development phases should start simultaneously.

We will discuss more on writing effective bug reports in another article. Here, let’s concentrate
on the bug/defect life cycle.

Life cycle of Bug:


1) Log new defect
When a tester logs any new bug, the mandatory fields are:
Build version, Submitted on, Product, Module, Severity, Synopsis, and Description (steps to reproduce).
To the above list you can add some optional fields if you are using a manual bug submission
template. These optional fields are: Customer name, Browser, Operating system, File
attachments or screenshots.
The following fields remain either specified or blank: if you have the authority to set the
Status, Priority, and ‘Assigned to’ fields, you can specify them. Otherwise the test manager
will set the status and priority and assign the bug to the respective module owner.
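The mandatory and optional fields above can be sketched as a simple record; the class and field names are illustrative, not from any particular bug tracker:

```python
# Illustrative bug record mirroring the fields listed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BugReport:
    # Mandatory fields
    build_version: str
    submitted_on: str
    product: str
    module: str
    severity: str
    synopsis: str
    description: str
    # Optional fields
    customer_name: Optional[str] = None
    browser: Optional[str] = None
    operating_system: Optional[str] = None
    # Set by the test manager if the reporter lacks authority
    status: str = "New"
    priority: Optional[str] = None
    assigned_to: Optional[str] = None
```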
Look at the following Bug life cycle:

(Figure: Bugzilla bug life cycle)
Once the bug is logged, it is reviewed by the development or test manager. The test manager
can set the bug status to Open, assign the bug to a developer, or defer the bug until the next
release.

When the bug is assigned, the developer can start working on it. The developer can set the bug
status to Won’t fix, Couldn’t reproduce, Need more information, or Fixed.

If the bug status set by the developer is either ‘Need more info’ or ‘Fixed’, then QA responds with a
specific action. If the bug is fixed, QA verifies the bug and can set the bug status to Verified
Closed or Reopen.

Bug status description:


These are the various stages of the bug life cycle. The status captions may vary depending on the
bug tracking system you are using.
1) New: When QA files a new bug.
2) Deferred: If the bug is not related to the current build, cannot be fixed in this release, or
is not important enough to fix immediately, the project manager can set the bug status as
Deferred.
3) Assigned: The ‘Assigned to’ field is set by the project lead or manager, who assigns the bug to a
developer.
4) Resolved/Fixed: When the developer makes the necessary code changes and verifies the
changes, he/she can set the bug status to ‘Fixed’, and the bug is passed to the testing team.
5) Could not reproduce: If the developer is not able to reproduce the bug by the steps given
in the bug report by QA, the developer can mark the bug as ‘CNR’. QA then needs to check whether the
bug is still reproducible and can reassign it to the developer with detailed reproduction steps.
6) Need more information: If the developer is not clear about the reproduction steps
provided by QA, he/she can mark the bug as ‘Need more information’.
In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.
7) Reopen: If QA is not satisfied with the fix and the bug is still reproducible even after the fix,
QA can mark it as ‘Reopen’ so that the developer can take appropriate action.
8) Closed: If the bug is verified by the QA team, the fix is okay, and the problem is solved,
QA can mark the bug as ‘Closed’.
9) Rejected/Invalid: Sometimes the developer or team lead can mark the bug as Rejected or
Invalid if the system is working according to specifications and the bug is just due to some
misinterpretation.
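The statuses above form a small state machine. A sketch, where the transition table is a simplified assumption (real trackers differ in their exact workflows):

```python
# Simplified bug life cycle as a transition table (illustrative).
TRANSITIONS = {
    "New":       {"Assigned", "Deferred", "Rejected"},
    "Assigned":  {"Fixed", "Could not reproduce", "Need more information", "Rejected"},
    "Fixed":     {"Closed", "Reopen"},
    "Could not reproduce":   {"Assigned"},
    "Need more information": {"Assigned"},
    "Reopen":    {"Assigned"},
}

def can_transition(current, new):
    """Return True if the status change is allowed by the table."""
    return new in TRANSITIONS.get(current, set())
```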
Testing Checklist: 
1 Create System and Acceptance Tests [ ]
2 Start Acceptance test Creation [ ]
3 Identify test team [ ]
4 Create Workplan [ ]
5 Create test Approach [ ]
6 Link Acceptance Criteria and Requirements to form the basis of
acceptance test [ ]
7 Use subset of system test cases to form requirements portion of
acceptance test [ ]
8 Create scripts for use by the customer to demonstrate that the system meets
requirements [ ]
9 Create test schedule. Include people and all other resources. [ ]
10 Conduct Acceptance Test [ ]
11 Start System Test Creation [ ]
12 Identify test team members [ ]
13 Create Workplan [ ]
14 Determine resource requirements [ ]
15 Identify productivity tools for testing [ ]
16 Determine data requirements [ ]
17 Reach agreement with data center [ ]
18 Create test Approach [ ]
19 Identify any facilities that are needed [ ]
20 Obtain and review existing test material [ ]
21 Create inventory of test items [ ]
22 Identify Design states, conditions, processes, and procedures [ ]
23 Determine the need for Code based (white box) testing. Identify conditions. [ ]
24 Identify all functional requirements [ ]
25 End inventory creation [ ]
26 Start test case creation [ ]
27 Create test cases based on inventory of test items [ ]
28 Identify logical groups of business function for new system [ ]
29 Divide test cases into functional groups traced to test item inventory [ ]
30 Design data sets to correspond to test cases [ ]
31 End test case creation [ ]
32 Review business functions, test cases, and data sets with users [ ]
33 Get signoff on test design from Project leader and QA [ ]
34 End Test Design [ ]
35 Begin test Preparation [ ]
36 Obtain test support resources [ ]
37 Outline expected results for each test case [ ]
38 Obtain test data. Validate and trace to test cases [ ]
39 Prepare detailed test scripts for each test case [ ]
40 Prepare & document environmental set up procedures. Include back up and
recovery plans [ ]
41 End Test Preparation phase [ ]
42 Conduct System Test [ ]
43 Execute test scripts [ ]
44 Compare actual result to expected [ ]
45 Document discrepancies and create problem report [ ]
46 Prepare maintenance phase input [ ]
47 Re-execute test group after problem repairs [ ]
48 Create final test report, include known bugs list [ ]
49 Obtain formal signoff [ ]
Black box testing
Black box testing treats the system as a “black box”, so it doesn’t explicitly use
knowledge of the internal structure or code. In other words, the test engineer need not
know the internal workings of the “black box” or application.
The main focus in black box testing is on the functionality of the system as a whole. The
term ‘behavioral testing’ is also used for black box testing, and white box testing is also
sometimes called ‘structural testing’. Behavioral test design is slightly different from
black-box test design because the use of internal knowledge isn’t strictly forbidden, but it’s
still discouraged.
Each testing method has its own advantages and disadvantages. There are some bugs that
cannot be found using only black box or only white box testing. The majority of applications
are tested with the black box method. We need to cover the majority of test cases so that most
of the bugs get discovered through black box testing.

Black box testing occurs throughout the software development and testing life cycle, i.e. in
the unit, integration, system, acceptance, and regression testing stages.

Tools used for Black Box testing:


Black box testing tools are mainly record-and-playback tools. These tools are used for
regression testing, to check whether a new build has introduced any bug into previously working
application functionality. These record-and-playback tools record test cases in the form of
scripts such as TSL, VBScript, JavaScript, or Perl.
Advantages of Black Box Testing
- The tester can be non-technical.
- Used to verify contradictions between the actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete.
Disadvantages of Black Box Testing
- The test inputs need to come from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is
slow and difficult.
- There is a chance of leaving some paths unidentified during this testing.
Methods of Black box Testing:
Graph Based Testing Methods:
Each and every application is build up of some objects. All such objects are identified and
graph is prepared. From this object graph each object relationship is identified and test
cases written accordingly to discover the errors.
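A minimal sketch of this idea, where the object graph is a toy example: each relationship (edge) between two objects becomes a candidate test case.

```python
# Graph-based testing sketch: one candidate test case per relationship.
def edge_test_cases(object_graph):
    """object_graph maps each object to the objects it interacts with."""
    return [f"verify {src} -> {dst}"
            for src, targets in sorted(object_graph.items())
            for dst in sorted(targets)]
```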
Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is
the art of guessing where errors may be hidden. There are no specific tools for this
technique; you write test cases that cover all the application paths.
Boundary Value Analysis:
Many systems have a tendency to fail at boundaries, so testing the boundary values of an
application is important. Boundary Value Analysis (BVA) is a functional testing technique in
which the extreme boundary values are chosen. Boundary values include maximum, minimum,
just inside/outside boundaries, typical values, and error values.
- Extends equivalence partitioning
- Test both sides of each boundary
- Look at output boundaries for test cases too
- Test min, min-1, max, max+1, and typical values
BVA techniques:
1. Number of variables
For n variables: BVA yields 4n + 1 test cases.
2. Kinds of ranges
Generalizing ranges depends on the nature or type of variables
Advantages of Boundary Value Analysis
1. Robustness Testing – Boundary Value Analysis plus values that go beyond the limits
2. Min – 1, Min, Min +1, Nom, Max -1, Max, Max +1
3. Forces attention to exception handling
Limitations of Boundary Value Analysis
Boundary value testing is efficient only for variables with fixed, ordered ranges, i.e. clear boundaries.
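The Min-1/Min/Min+1/Nom/Max-1/Max/Max+1 pattern above can be generated mechanically for a single integer range; this sketch assumes the nominal value is simply the midpoint:

```python
# Robustness-style boundary values for one integer range [lo, hi].
def boundary_values(lo, hi):
    """Return min-1, min, min+1, nominal, max-1, max, max+1."""
    nominal = (lo + hi) // 2  # midpoint as the 'typical' value (assumption)
    return [lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1]
```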
Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a
program into classes of data from which test cases can be derived.
How is this partitioning performed while testing:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence
class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
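Rule 1 above can be made concrete. A sketch, using a made-up “age must be 18-60” requirement:

```python
# Equivalence classes for a range-type input condition (rule 1 above).
def range_classes(lo, hi):
    """One valid class inside [lo, hi], two invalid classes outside it."""
    return {
        "valid": (lo, hi),
        "invalid_below": ("-inf", lo - 1),
        "invalid_above": (hi + 1, "+inf"),
    }
```

One representative value is then picked from each class, e.g. 30, 10, and 70 for an 18-60 range.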
What is White Box Testing?
White box testing (WBT) is also called structural or glass box testing.
White box testing involves looking at the structure of the code. When you know the internal
structure of a product, tests can be conducted to ensure that the internal operations perform
according to the specification and that all internal components have been adequately
exercised.

White Box Testing is coverage of the specification in the code.


Code coverage:
Segment coverage:
Ensure that each code statement is executed once.
Branch Coverage or Node Testing:
Cover each code branch from all possible ways.
Compound Condition Coverage:
For multiple conditions, test each condition with multiple paths and combinations of different
paths to reach that condition.
Basis Path Testing:
Each independent path in the code is taken for testing.
Data Flow Testing (DFT):
In this approach you track specific variables through each possible calculation, thus
defining the set of intermediate paths through the code. DFT tends to reflect dependencies,
mainly through sequences of data manipulation. In short, each data variable is
tracked and its use is verified.
This approach tends to uncover bugs like variables used but not initialized, or declared but
not used, and so on.
Path Testing:
Path testing is where all possible paths through the code are defined and covered. It is a
time-consuming task.
Loop Testing:
These strategies relate to testing single loops, concatenated loops, and nested loops.
Independent and dependent code loops and values are tested by this approach.
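Branch coverage from the list above can be illustrated with a toy function; the function and its 100-unit threshold are invented for the example. One `if` gives two branches, so full branch coverage needs at least two inputs:

```python
def discount(amount):
    """Toy function with two branches (threshold is illustrative)."""
    if amount >= 100:        # true branch
        return amount * 0.9
    return amount            # false branch

# One input per branch achieves 100% branch coverage here:
branch_inputs = [150, 50]
```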
Why we do White Box Testing?
To ensure:
• that all independent paths within a module have been exercised at least once;
• that all logical decisions are verified for their true and false values;
• that all loops are executed at their boundaries and within their operational bounds, and
internal data structures are validated.
Need for White Box Testing?
To discover the following types of bugs:
• Logical errors, which tend to creep into our work when we design and implement functions,
conditions, or controls that are out of the mainstream of the program
• Design errors due to the difference between the logical flow of the program and the
actual implementation
• Typographical errors and syntax errors
Skills Required:
We need to write test cases that ensure complete coverage of the program logic.
For this, we need to know the program well, i.e. we should know the specification and the
code to be tested, and we need knowledge of programming languages and logic.
Limitations of WBT:
It is not possible to test each and every path of the loops in a program, which means
exhaustive testing is impossible for large systems.
This does not mean that WBT is not effective. Selecting important logical paths and data
structures for testing is practically possible and effective.

REGRESSION TESTING
Regression testing is a method of verification: verifying that bugs are fixed and that newly
added features have not created problems in the previously working version of the software.
Why Regression Testing?
Regression testing is initiated when a programmer fixes a bug or adds new code for new
functionality to the system. It is a quality measure to check that the new code complies with
the old code and that unmodified code is not affected.
Most of the time the testing team has the task of checking last-minute changes in the
system. In such situations, testing only the affected application area is necessary to
complete the testing process in time while covering all major system aspects.
How much regression testing?
This depends on the scope of the newly added feature. If the scope of the fix or feature is
large, then the affected application area is quite large and testing should be thorough,
including all the application test cases. But this can be decided effectively only when the
tester gets input from the developer about the scope, nature and amount of change.
What we do in regression testing?
 Rerunning the previously conducted tests
 Comparing current results with previously executed test results.
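The two steps above can be sketched in a few lines of Python; the compute function and the baseline values are illustrative assumptions, standing in for the application under test and the results recorded from the last known-good build:

```python
def compute(x):
    # Stand-in for the functionality being regression tested.
    return x * 2 + 1

# Results captured from a previous, known-good run.
baseline = {1: 3, 2: 5, 10: 21}

# Rerun the old tests and compare against the recorded results.
failures = [x for x, expected in baseline.items()
            if compute(x) != expected]
assert not failures, f"regression detected for inputs: {failures}"
```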
Regression Testing Tools:
Automated regression testing is the testing area where we can automate most of the
testing effort. We rerun all the previously executed test cases, which means we have a test
case set available, and running these test cases manually is time consuming. Since we know
the expected results, automating these test cases is a time-saving and efficient regression
testing method. The extent of automation depends on the number of test cases that will
remain applicable over time. If test cases keep changing as the application scope grows,
then automating the regression procedure will be a waste of time.
Most regression testing tools are of the record-and-playback type: you record the test cases
by navigating through the AUT (application under test) and verify whether the expected
results appear or not.
Example regression testing tools are:
 Winrunner
 QTP
 AdventNet QEngine
 Regression Tester
 vTest
 Watir
 Selenium
 actiWate
 Rational Functional Tester
 SilkTest
Most of these tools are both functional and regression testing tools.
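The record-and-playback idea itself can be illustrated without any specific tool: a recorded script is just a sequence of steps with expected results, replayed against the application under test. The Calculator class and the recorded steps below are illustrative stand-ins:

```python
class Calculator:
    # Toy stand-in for the application under test (AUT).
    def __init__(self):
        self.value = 0

    def press(self, n):
        self.value += n
        return self.value

# A "recorded" script: (input, expected result) pairs captured
# during a previous, verified session.
recorded_steps = [(3, 3), (4, 7), (5, 12)]

# Playback: replay each step and verify the expected result.
app = Calculator()
for n, expected in recorded_steps:
    assert app.press(n) == expected, f"mismatch replaying input {n}"
```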
Regression Testing of GUI Applications:
It is difficult to perform GUI (Graphical User Interface) regression testing when the GUI
structure is modified. The test cases written for the old GUI either become obsolete or need
to be reused. Reusing the regression test cases means modifying the GUI test cases
according to the new GUI, but this task becomes cumbersome if you have a large set of GUI
test cases.

Testing Types
ACCEPTANCE TESTING
Testing to verify a product meets customer specified requirements. A customer usually does this type of
testing on a product that is developed externally.
BLACK BOX TESTING
Testing without knowledge of the internal workings of the item being tested. Tests are usually functional.
COMPATIBILITY TESTING
Testing to ensure compatibility of an application or Web site with different browsers, OSs, and hardware
platforms. Compatibility testing can be performed manually or can be driven by an automated functional or
regression test suite.
CONFORMANCE TESTING
Verifying implementation conformance to industry standards. Producing tests for the behavior of an
implementation to be sure it provides the portability, interoperability, and/or compatibility a standard
defines.
FUNCTIONAL TESTING
Validating an application or Web site conforms to its specifications and correctly performs all its required
functions. This entails a series of tests which perform a feature by feature validation of behavior, using a
wide range of normal and erroneous input data. This can involve testing of the product's user interface,
APIs, database management, security, installation, networking, etc. Functional testing can be performed on
an automated or manual basis using black box or white box methodologies.
INTEGRATION TESTING
Testing in which modules are combined and tested as a group. Modules are typically code modules,
individual applications, client and server applications on a network, etc. Integration Testing follows unit
testing and precedes system testing.
LOAD TESTING
Load testing is a generic term covering Performance Testing and Stress Testing.
PERFORMANCE TESTING
Performance testing can be applied to understand your application or WWW site's scalability, or to
benchmark the performance in an environment of third party products such as servers and middleware for
potential purchase. This sort of testing is particularly useful to identify performance bottlenecks in high use
applications. Performance testing generally involves an automated test suite as this allows easy simulation
of a variety of normal, peak, and exceptional load conditions.
REGRESSION TESTING
Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new
release of a product or Web site. Such testing ensures reported product defects have been corrected for
each new release and that no new quality problems were introduced in the maintenance process. Though
regression testing can be performed manually an automated test suite is often used to reduce the time and
resources needed to perform the required testing.
SMOKE TESTING
A quick-and-dirty test that the major functions of a piece of software work without bothering with finer
details. Originated in the hardware testing practice of turning on a new piece of hardware for the first time
and considering it a success if it does not catch on fire.
STRESS TESTING
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements
to determine the load under which it fails and how. A graceful degradation under load leading to non-
catastrophic failure is the desired result. Often Stress Testing is performed using the same process as
Performance Testing but employing a very high level of simulated load.
SYSTEM TESTING
Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified
requirements. System testing falls within the scope of black box testing, and as such, should require no
knowledge of the inner design of the code or logic.
UNIT TESTING
Functional and reliability testing in an Engineering environment. Producing tests for the behavior of
components of a product to ensure their correct behavior prior to system integration.
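As a brief illustration, here is what a unit test for a single component might look like using Python's built-in unittest framework (the leap_year function is a hypothetical component):

```python
import unittest

def leap_year(year):
    # The component under test, checked in isolation before
    # it is integrated with the rest of the system.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_divisible_by_400(self):
        self.assertTrue(leap_year(2000))

# Run with: python -m unittest <this file>
```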
WHITE BOX TESTING
Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques
such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing.
