Web testing covers the following areas:
• Functionality Testing
• Performance Testing
• Usability Testing
• Server Side Interface
• Client Side Compatibility
• Security
Functionality:
When testing the functionality of a web site, the following should be covered:
• Links
i. Internal Links
ii. External Links
iii. Mail Links
iv. Broken Links
• Forms
i. Field validation
ii. Error message for wrong input
iii. Optional and Mandatory fields
• Database
* Testing verifies database integrity.
• Cookies
* Testing is done on the client side, checking the cookies stored in the browser's temporary Internet files.
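The link checks above (internal, external, and mail links) can be sketched as a small classifier. This is a minimal illustration using only the standard library; the `LinkCollector` class name and the example URLs are made up, and actual broken-link detection would additionally require sending HTTP requests (e.g. HEAD) to each collected URL, which is omitted here:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collects <a href> targets from a page and sorts them into
    internal, external, and mail links relative to a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.internal, self.external, self.mail = [], [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href:
            return
        if href.startswith("mailto:"):
            self.mail.append(href)
            return
        absolute = urljoin(self.base_url, href)  # resolve relative links
        if urlparse(absolute).netloc == urlparse(self.base_url).netloc:
            self.internal.append(absolute)
        else:
            self.external.append(absolute)

collector = LinkCollector("https://example.com/")
collector.feed('<a href="/about">About</a>'
               '<a href="https://other.org/">Other</a>'
               '<a href="mailto:qa@example.com">Mail</a>')
print(collector.internal)  # ['https://example.com/about']
print(collector.external)  # ['https://other.org/']
print(collector.mail)      # ['mailto:qa@example.com']
```

In a real suite, each collected URL would then be requested and any non-2xx/3xx response reported as a broken link.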
Performance:
Performance testing can be applied to understand the web site's scalability, or to benchmark performance in an environment of third-party products, such as servers and middleware being evaluated for purchase.
• Connection Speed:
Tested over various connection types such as dial-up, ISDN, etc.
• Load:
i. How many users access the site at a time?
ii. Check peak loads and how the system behaves under them.
iii. Large amounts of data accessed by a user.
• Stress:
i. Continuous Load
ii. Performance of memory, CPU, file handling, etc.
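A rough sketch of the load points above: fire a number of concurrent simulated users and measure response latency. A real load test would drive the actual site over the network (typically with a dedicated tool); here a `time.sleep` stands in for the server response, and all names are illustrative:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request(user_id):
    # Stand-in for a real HTTP call; a real load test would hit the
    # application URL instead of sleeping.
    start = time.perf_counter()
    time.sleep(0.01)  # pretend the server takes ~10 ms to respond
    return time.perf_counter() - start

def run_load(concurrent_users):
    """Run the given number of users concurrently and report
    peak and average latency across all requests."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(simulated_request, range(concurrent_users)))
    return max(latencies), sum(latencies) / len(latencies)

peak, average = run_load(50)
print(f"peak latency: {peak:.3f}s, average: {average:.3f}s")
```

Raising the user count until latency degrades (load) or the system fails (stress) distinguishes the two test types described above.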
Usability:
Usability testing is the process by which the human-computer interaction characteristics of a
system are measured, and weaknesses are identified for correction.
• Ease of learning
• Navigation
• Subjective user satisfaction
• General appearance
Server Side Interface:
In web testing, the server-side interface should be tested. This is done by verifying that communication happens properly. Compatibility of the server with software, hardware, network, and database should also be tested.
Client Side Compatibility:
Client-side compatibility is also tested on various platforms, using various browsers, etc.
Security:
The primary reason for testing the security of a web application is to identify potential vulnerabilities
and subsequently repair them.
• Network Scanning
• Vulnerability Scanning
• Password Cracking
• Log Review
• Integrity Checkers
• Virus Detection
Remember these ten rules and I am sure you will gain very good testing
skills.
The client used many ambiguous terms with multiple possible meanings, making it difficult to analyze the exact intent. The next version of the requirement document from the client was clear enough to freeze for the design phase.
Specifications should state both types of requirements, i.e. what the system should do and what it should not.
Generally I use my own method to uncover unspecified requirements. When I read the software requirements specification (SRS) document, I note down my own understanding of the requirements that are specified, plus the other requirements the SRS document is supposed to cover. This helps me ask questions about unspecified requirements and make them clearer.
To check requirements for completeness, divide them into three sections: 'must implement' requirements, requirements that are not specified but are 'assumed', and 'imagination' type requirements. Check that all three types are addressed before the software design phase.
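The three-bucket completeness check above can be sketched in a few lines; the requirement entries and category labels below are illustrative, not taken from any real SRS:

```python
# Hypothetical requirement entries tagged with the three categories
# described in the text: 'must', 'assumed', and 'imagination'.
requirements = [
    ("User can reset password", "must"),
    ("Password reset email arrives within 5 minutes", "assumed"),
    ("System suggests stronger passwords", "imagination"),
]

def completeness_report(requirements):
    """Group requirements by category so gaps in any bucket are visible
    before the design phase begins."""
    buckets = {"must": [], "assumed": [], "imagination": []}
    for text, kind in requirements:
        buckets[kind].append(text)
    return buckets

report = completeness_report(requirements)
for kind in ("must", "assumed", "imagination"):
    print(kind, len(report[kind]))
```

An empty 'assumed' or 'imagination' bucket is a hint that unspecified requirements have not yet been probed with the client.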
What is BVT?
A Build Verification Test (BVT) is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. These test cases cover core functionality and ensure that the application is stable and can be tested thoroughly. Typically the BVT process is automated. If BVT fails, the build is assigned back to the developer for a fix.
What is the main task in a build release? Obviously, file 'check-in', i.e. including all the new and modified project files associated with the respective build. BVT was primarily introduced to check initial build health, i.e. whether all the new and modified files are included in the release, all file formats are correct, and every file's version, language, and associated flags are right.
These basic checks are worth running before the build is released to the test team. You will save time and money by discovering build flaws at the very beginning using BVT.
Which test cases should be included in BVT?
This is a tricky decision to make before automating the BVT task. Keep in mind that the success of BVT depends on which test cases you include in it.
Here are some simple tips to include test cases in your BVT automation suite:
Include only critical test cases in BVT.
All test cases included in BVT should be stable.
All the test cases should have known expected result.
Make sure all included critical functionality test cases are sufficient for application
test coverage.
Also, do not include modules in BVT that are not yet stable. For under-development features you cannot predict expected behavior, as these modules are unstable, and you may already know of failures in these incomplete modules before testing. There is no point in including such modules or test cases in BVT.
You can simplify this critical-functionality test case selection by communicating with everyone involved in the project development and testing life cycle. Such a process should negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality standards; these standards can be met only by analyzing major project features and scenarios.
Example: Test cases to be included in BVT for Text editor application (Some sample
tests only):
1) Test case for creating a text file.
2) Test case for writing text into the editor.
3) Test case for copy, cut, and paste functionality of the editor.
4) Test case for opening, saving, and deleting a text file.
These are some sample test cases that can be marked as 'critical'; for every minor or major change in the application, these basic critical test cases should be executed. This task can be easily accomplished by BVT.
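A minimal sketch of how such a critical-case suite might be automated. Plain file operations stand in for the text editor here; the `bvt_*` helper names and the editor behavior are assumptions for illustration, and a real BVT would drive the actual application build:

```python
import os
import tempfile

# Each helper represents one critical BVT check from the sample list
# above: create/write, open/read, and delete.
def bvt_create_and_write(path):
    with open(path, "w") as f:
        f.write("hello")
    return os.path.exists(path)

def bvt_open_and_read(path):
    with open(path) as f:
        return f.read() == "hello"

def bvt_delete(path):
    os.remove(path)
    return not os.path.exists(path)

def run_bvt():
    """Run all critical checks in order; the build passes BVT only if
    every check passes."""
    path = os.path.join(tempfile.mkdtemp(), "bvt.txt")
    checks = [bvt_create_and_write(path),
              bvt_open_and_read(path),
              bvt_delete(path)]
    return all(checks)

print("BVT passed" if run_bvt()
      else "BVT failed: build returned to developer")
```

The all-or-nothing result mirrors the process described earlier: a failing BVT sends the build back to the developer before the test team spends any time on it.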
The BVT automation suite needs to be maintained and modified from time to time, e.g. adding test cases to BVT when new stable project modules become available.
If you want to detect and resolve defects in the early development stages, defect tracking should start simultaneously with the software development phases.
We will discuss more on writing effective bug reports in another article. Let's concentrate here on the bug/defect life cycle.
[Figure: Bugzilla bug life cycle]
On successful logging, the bug is reviewed by the development or test manager. The test manager can set the bug status to Open, assign the bug to a developer, or defer the bug until the next release.
When the bug is assigned, the developer can start working on it. The developer can set the bug status to Won't Fix, Couldn't Reproduce, Need More Info, or Fixed.
If the status set by the developer is either Need More Info or Fixed, QA responds with the specific action. If the bug is fixed, QA verifies it and can set the bug status to Verified Closed or Reopen.
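The status flow above can be sketched as a small state machine. The transition table below follows the Bugzilla-style flow described in the text, but it is a partial illustration: exact status names and transitions vary from tracker to tracker:

```python
# Partial transition table for the bug life cycle described above.
# Keys are current statuses; values are the statuses reachable from them.
TRANSITIONS = {
    "New": {"Open", "Assigned", "Deferred"},
    "Assigned": {"Fixed", "Won't Fix", "Couldn't Reproduce",
                 "Need More Info"},
    "Need More Info": {"Assigned"},     # QA supplies info, work resumes
    "Fixed": {"Verified Closed", "Reopen"},  # QA verifies or reopens
    "Reopen": {"Assigned"},
}

def advance(current, new):
    """Move a bug to a new status, rejecting transitions the
    life cycle does not allow."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

state = "New"
for step in ("Assigned", "Fixed", "Verified Closed"):
    state = advance(state, step)
print(state)  # Verified Closed
```

Encoding the legal transitions this way makes it easy to catch workflow mistakes, such as closing a bug that was never fixed.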
Black box testing occurs throughout the software development and testing life cycle, i.e. in the unit, integration, system, acceptance, and regression testing stages.
This is a method of verification: verifying that the bugs are fixed and that the newly added features have not created problems in the previously working version of the software.
Why regression Testing?
Regression testing is initiated when a programmer fixes a bug or adds new code for new functionality to the system. It is a quality measure to check that the new code complies with the old code and that unmodified code is not affected.
Most of the time, the testing team is asked to check last-minute changes in the system. In such situations, testing only the affected application area is necessary to complete the testing process in time while still covering all major system aspects.
How much regression testing?
This depends on the scope of the newly added feature. If the scope of the fix or feature is large, then the affected application area is also quite large and testing should be thorough, including all the application test cases. This can be decided effectively when the tester gets input from the developer about the scope, nature, and amount of the change.
What we do in regression testing?
• Rerunning the previously conducted tests.
• Comparing current results with previously executed test results.
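The comparison step can be sketched as follows; the test case names and pass/fail results below are made up for illustration:

```python
# Hypothetical results: the baseline is from the previous release,
# 'current' is from the build under test.
baseline = {"login": "pass", "search": "pass", "checkout": "fail"}
current  = {"login": "pass", "search": "fail", "checkout": "fail"}

def find_regressions(baseline, current):
    """A regression is a test case that passed in the previous run
    but fails now; known failures are not counted again."""
    return sorted(name for name in current
                  if baseline.get(name) == "pass"
                  and current[name] == "fail")

print(find_regressions(baseline, current))  # ['search']
```

Note that `checkout` is not reported: it failed in the baseline too, so it is a pre-existing defect rather than a regression introduced by the new build.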
Regression Testing Tools:
Automated regression testing is the area where we can automate most of the testing effort. We run all the previously executed test cases, which means the test case set is already available, and running these test cases manually is time-consuming. Since we know the expected results, automating these test cases is a time-saving and efficient regression testing method. The extent of automation depends on the number of test cases that will remain applicable over time. If test cases keep changing as the application scope grows, then automating the regression procedure will be a waste of time.
Most regression testing tools are of the record-and-playback type: you record the test cases by navigating through the application under test (AUT) and verify whether the expected results appear or not.
Example regression testing tools are:
WinRunner
QTP
AdventNet QEngine
Regression Tester
vTest
Watir
Selenium
actiWate
Rational Functional Tester
SilkTest
Most of these tools are both functional and regression testing tools.
Regression Testing Of GUI application:
It is difficult to perform GUI (Graphical User Interface) regression testing when the GUI structure is modified. The test cases written for the old GUI either become obsolete or need to be reused. Reusing the regression test cases means the GUI test cases are modified according to the new GUI. This task becomes cumbersome if you have a large set of GUI test cases.
Testing Types
ACCEPTANCE TESTING
Testing to verify a product meets customer specified requirements. A customer usually does this type of
testing on a product that is developed externally.
BLACK BOX TESTING
Testing without knowledge of the internal workings of the item being tested. Tests are usually functional.
COMPATIBILITY TESTING
Testing to ensure compatibility of an application or Web site with different browsers, OSs, and hardware
platforms. Compatibility testing can be performed manually or can be driven by an automated functional or
regression test suite.
CONFORMANCE TESTING
Verifying implementation conformance to industry standards. Producing tests for the behavior of an
implementation to be sure it provides the portability, interoperability, and/or compatibility a standard
defines.
FUNCTIONAL TESTING
Validating an application or Web site conforms to its specifications and correctly performs all its required
functions. This entails a series of tests which perform a feature by feature validation of behavior, using a
wide range of normal and erroneous input data. This can involve testing of the product's user interface,
APIs, database management, security, installation, networking, etc. Functional testing can be performed on an
automated or manual basis using black box or white box methodologies.
INTEGRATION TESTING
Testing in which modules are combined and tested as a group. Modules are typically code modules,
individual applications, client and server applications on a network, etc. Integration Testing follows unit
testing and precedes system testing.
LOAD TESTING
Load testing is a generic term covering Performance Testing and Stress Testing.
PERFORMANCE TESTING
Performance testing can be applied to understand your application or WWW site's scalability, or to
benchmark the performance in an environment of third party products such as servers and middleware for
potential purchase. This sort of testing is particularly useful to identify performance bottlenecks in high use
applications. Performance testing generally involves an automated test suite as this allows easy simulation
of a variety of normal, peak, and exceptional load conditions.
REGRESSION TESTING
Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new
release of a product or Web site. Such testing ensures reported product defects have been corrected for
each new release and that no new quality problems were introduced in the maintenance process. Though
regression testing can be performed manually an automated test suite is often used to reduce the time and
resources needed to perform the required testing.
SMOKE TESTING
A quick-and-dirty test that the major functions of a piece of software work without bothering with finer
details. Originated in the hardware testing practice of turning on a new piece of hardware for the first time
and considering it a success if it does not catch on fire.
STRESS TESTING
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements
to determine the load under which it fails and how. A graceful degradation under load leading to non-
catastrophic failure is the desired result. Often Stress Testing is performed using the same process as
Performance Testing but employing a very high level of simulated load.
SYSTEM TESTING
Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified
requirements. System testing falls within the scope of black box testing, and as such, should require no
knowledge of the inner design of the code or logic.
UNIT TESTING
Functional and reliability testing in an Engineering environment. Producing tests for the behavior of
components of a product to ensure their correct behavior prior to system integration.
WHITE BOX TESTING
Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques
such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing.
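As an illustration of the branch testing mentioned above, the sketch below pairs a small two-branch function with one test per branch outcome. The `classify` function is a made-up example, not from any real product:

```python
def classify(amount):
    """A small function with three branch outcomes, used to
    demonstrate branch coverage."""
    if amount < 0:
        return "invalid"
    elif amount == 0:
        return "empty"
    else:
        return "ok"

# Branch testing requires at least one test per branch outcome:
assert classify(-1) == "invalid"  # covers the amount < 0 branch
assert classify(0) == "empty"     # covers the amount == 0 branch
assert classify(5) == "ok"        # covers the else branch
print("all branches exercised")
```

Unlike black box testing, selecting these inputs requires reading the code: each test input is chosen specifically to force execution down one internal branch.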