Software Testing Types:

Black box testing - Internal system design is not considered in this type of testing. Tests
are based on requirements and functionality.

White box testing - This testing is based on knowledge of the internal logic of an
application's code. Also known as glass box testing. The internal workings of the software
and its code must be known for this type of testing. Tests are based on coverage of code
statements, branches, paths, and conditions.
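
For example, here is a small Python sketch (the discount rule is made up for illustration)
contrasting the two approaches: the black-box test is derived only from the requirement,
while the white-box tests are written to cover both branches of the code:

    def discount(total):
        # Requirement: orders of 100 or more get 10% off.
        if total >= 100:
            return total * 0.9
        return total

    # Black-box test: derived from the requirement alone.
    assert discount(200) == 180.0

    # White-box tests: derived from the code, covering both branches.
    assert discount(100) == 90.0   # boundary value, 'if' branch
    assert discount(99) == 99      # 'else' branch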

Unit testing - Testing of individual software components or modules. Typically done by
the programmer and not by testers, as it requires detailed knowledge of the internal
program design and code. It may require developing test driver modules or test harnesses.
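
For example, a minimal test driver using Python's built-in unittest module, with a stubbed
dependency so the unit is tested in isolation (the function and field names are
illustrative):

    import unittest
    from unittest.mock import Mock

    def send_welcome(user, mailer):
        # Unit under test: composes and sends a welcome mail.
        mailer.send(user["email"], "Welcome, %s!" % user["name"])

    class SendWelcomeTest(unittest.TestCase):
        def test_mail_is_sent_to_user(self):
            mailer = Mock()   # test harness stub replacing the real mail server
            send_welcome({"name": "Ana", "email": "ana@example.com"}, mailer)
            mailer.send.assert_called_once_with("ana@example.com", "Welcome, Ana!")

    if __name__ == "__main__":
        unittest.main()       # the test driver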

Incremental integration testing - A bottom-up approach to testing, i.e., continuous testing
of an application as new functionality is added. Application functionality and modules
should be independent enough to test separately. Done by programmers or by testers.

Integration testing - Testing of integrated modules to verify combined functionality after
integration. Modules are typically code modules, individual applications, client and
server applications on a network, etc. This type of testing is especially relevant to
client/server and distributed systems.
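
Continuing the illustrative example, an integration test wires two real modules together
instead of stubbing one of them out (both classes here are made up):

    class InMemoryUserStore:
        def __init__(self):
            self.users = {}
        def add(self, name):
            self.users[name] = {"name": name}
        def get(self, name):
            return self.users.get(name)

    class Registration:
        def __init__(self, store):
            self.store = store
        def register(self, name):
            self.store.add(name)
            return self.store.get(name) is not None

    def test_registration_persists_user():
        store = InMemoryUserStore()   # a real collaborator, not a stub
        assert Registration(store).register("Ana")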

Functional testing - This type of testing ignores the internal parts and focuses only on
whether the output is as per the requirements. It is black-box type testing geared to the
functional requirements of an application.

System testing - The entire system is tested as per the requirements. This is black-box
type testing based on the overall requirements specification, covering all combined parts
of the system.

End-to-end testing - Similar to system testing; it involves testing of a complete
application environment in a situation that mimics real-world use, such as interacting
with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.

Sanity testing - Testing to determine whether a new software version is performing well
enough to accept it for a major testing effort. If the application crashes on initial use,
the system is not stable enough for further testing, and the build or application is sent
back to be fixed.

Regression testing - Testing the application as a whole after a modification to any module
or functionality. Since it is difficult to cover the whole system in regression testing,
automation tools are typically used for this type of testing.
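
One common way to automate the subset is to tag regression tests and run only those on
each build; a sketch assuming pytest (register the 'regression' marker in pytest.ini to
avoid warnings):

    import pytest

    def login(user, password):
        # Stand-in for the real login module; illustrative only.
        return user == "ana" and password == "secret"

    @pytest.mark.regression            # part of the regression subset
    def test_login_still_works():
        assert login("ana", "secret")

    def test_rejects_bad_password():   # ordinary test, not in the subset
        assert not login("ana", "wrong")

    # Run only the tagged subset after each change:
    #   pytest -m regression
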
Acceptance testing - Normally this type of testing is done to verify whether the system
meets the customer-specified requirements. The user or customer does this testing to
determine whether to accept the application.

Load testing - This is performance testing to check system behavior under load. Testing an
application under heavy loads, such as testing a web site under a range of loads to
determine at what point the system's response time degrades or fails.
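
A very small load-test sketch using only the Python standard library (the URL and user
counts are placeholders; real load testing would normally use a dedicated tool):

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/"   # placeholder target

    def one_request(_):
        start = time.time()
        urllib.request.urlopen(URL).read()
        return time.time() - start

    # Step up the load and watch where response time degrades.
    for users in (1, 10, 50, 100):
        with ThreadPoolExecutor(max_workers=users) as pool:
            times = list(pool.map(one_request, range(users)))
        print(users, "users: worst response %.3fs" % max(times))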

Stress testing - The system is stressed beyond its specifications to check how and when it
fails. Performed under heavy load, such as entering data beyond storage capacity, running
complex database queries, or feeding continuous input to the system or database.

Performance testing - A term often used interchangeably with 'stress' and 'load' testing.
It checks whether the system meets performance requirements. Different performance and
load tools are used for this.

Usability testing - A user-friendliness check. The application flow is tested: can a new
user understand the application easily, and is proper help documented wherever the user
might get stuck? Basically, system navigation is checked in this testing.

Install/uninstall testing - Tested for full, partial, or upgrade install/uninstall
processes on different operating systems under different hardware and software
environments.

Recovery testing - Testing how well a system recovers from crashes, hardware failures,
or other catastrophic problems.

Security testing - Can the system be penetrated by any hacking approach? Testing how well
the system protects against unauthorized internal or external access, and checking whether
the system and database are safe from external attacks.
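
One basic automated check along these lines (a sketch; the endpoint is hypothetical)
asserts that a protected resource rejects requests carrying no credentials:

    import urllib.request
    import urllib.error

    def test_admin_requires_auth():
        # A protected endpoint must reject anonymous access.
        try:
            urllib.request.urlopen("http://localhost:8000/admin")
        except urllib.error.HTTPError as e:
            assert e.code in (401, 403)   # unauthorized / forbidden
        else:
            raise AssertionError("unauthenticated request was accepted")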

Compatibility testing - Testing how well software performs in a particular
hardware/software/operating system/network environment and in different combinations of
the above.

Comparison testing - Comparison of product strengths and weaknesses with previous
versions or other similar products.

Alpha testing - An in-house virtual user environment can be created for this type of
testing. Testing is done at the end of development. Minor design changes may still be made
as a result of such testing.

Beta testing - Testing typically done by end-users or others. This is the final testing
before releasing the application for commercial use.

A sample testing cycle

Although variations exist between organizations, there is a typical cycle for testing[30]:

• Requirements analysis: Testing should begin in the requirements phase of the
software development life cycle. During the design phase, testers work with
developers in determining what aspects of a design are testable and with what
parameters those tests work.
• Test planning: Test strategy, test plan, and testbed creation. Many activities will be
carried out during testing, so a plan is needed.
• Test development: Test procedures, test scenarios, test cases, test datasets, and test
scripts to use in testing software (see the sketch after this list).
• Test execution: Testers execute the software based on the plans and tests and
report any errors found to the development team.
• Test reporting: Once testing is completed, testers generate metrics and make final
reports on their test effort and whether or not the software tested is ready for
release.
• Test result analysis: Or Defect Analysis, is done by the development team usually
along with the client, in order to decide what defects should be treated, fixed,
rejected (i.e. found software working properly) or deferred to be dealt with at a
later time.
• Retesting the resolved defects: Once a defect has been dealt with by the
development team, it is retested by the testing team.
• Regression testing: It is common to have a small test program built of a subset of
tests, for each integration of new, modified or fixed software, in order to ensure
that the latest delivery has not ruined anything, and that the software product as a
whole is still working correctly.
• Test closure: Once the test meets the exit criteria, activities such as capturing the
key outputs, lessons learned, results, logs, and documents related to the project are
archived and used as a reference for future projects.
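
As a small illustration of test development, one test script can drive many test cases
from a shared dataset; a pytest sketch with a made-up validation rule:

    import pytest

    def is_valid_username(name):
        # Stand-in for the unit specified in the requirements.
        return 3 <= len(name) <= 12 and name.isalnum()

    # Test dataset: (input, expected result) pairs from the test design.
    CASES = [
        ("ana", True),
        ("ab", False),        # too short
        ("a" * 13, False),    # too long
        ("ana!", False),      # invalid character
    ]

    @pytest.mark.parametrize("name,expected", CASES)
    def test_username_validation(name, expected):
        assert is_valid_username(name) == expected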

Life cycle of a bug:

1) Log new defect

When a tester logs a new bug, the mandatory fields are:
Build version, Submit On, Product, Module, Severity, Synopsis, and Description to
Reproduce.

To the above list you can add some optional fields if you are using a manual bug
submission template. These optional fields are: Customer name, Browser, Operating system,
and File attachments or screenshots.
The following fields remain either specified or blank: if you have the authority to set
the bug Status, Priority, and 'Assigned to' fields, then you can specify them. Otherwise
the test manager will set the status and bug priority and assign the bug to the respective
module owner.
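
As an illustration, a logged defect might be captured as a simple record; the field names
follow the lists above and all values are made up:

    new_bug = {
        # Mandatory fields:
        "build_version": "2.1.0.455",
        "submit_on": "2010-06-14",
        "product": "OrderDesk",
        "module": "Checkout",
        "severity": "Major",
        "synopsis": "Order total wrong when coupon applied twice",
        "description_to_reproduce": "1. Add item to cart. "
                                    "2. Apply the same coupon twice. "
                                    "3. Total is discounted twice.",
        # Optional fields:
        "customer_name": None,
        "browser": "Firefox 3.6",
        "operating_system": "Windows XP SP3",
        "attachments": [],
        # Specified only if you have the authority, otherwise left blank:
        "status": None,
        "priority": None,
        "assigned_to": None,
    }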

Look at the following Bug life cycle:

[Figure: Bugzilla bug life cycle diagram]

The figure is quite complicated, but once you consider the significant steps in the bug
life cycle you will get a quick idea of it.

On successful logging, the bug is reviewed by the development or test manager. The test
manager can set the bug status to Open, can assign the bug to a developer, or the bug may
be deferred until the next release.

When the bug gets assigned, the developer can start working on it. The developer can set
the bug status to Won't fix, Couldn't reproduce, Need more information, or Fixed.

If the bug status set by the developer is either 'Need more info' or 'Fixed', then QA
responds with the appropriate action. If the bug is fixed, QA verifies the fix and can set
the bug status to 'Closed' (verified) or 'Reopen'.

Bug status description:

These are the various stages of the bug life cycle. The status caption may vary depending
on the bug tracking system you are using.

1) New: When QA files a new bug.

2) Deferred: If the bug is not related to the current build, cannot be fixed in this
release, or is not important enough to fix immediately, then the project manager can set
the bug status to Deferred.

3) Assigned: The 'Assigned to' field is set by the project lead or manager, who assigns
the bug to a developer.

4) Resolved/Fixed: When the developer makes the necessary code changes and verifies the
changes, he/she can set the bug status to 'Fixed', and the bug is passed to the testing
team.

5) Could not reproduce: If the developer is not able to reproduce the bug from the steps
given in the bug report by QA, the developer can mark the bug as 'CNR'. QA then needs to
check whether the bug is still reproducible and can assign it back to the developer with
detailed reproduction steps.

6) Need more information: If the developer is not clear about the bug reproduction steps
provided by QA, he/she can mark it as 'Need more information'. In this case QA needs to
add detailed reproduction steps and assign the bug back to the developer for a fix.

7) Reopen: If QA is not satisfied with the fix and the bug is still reproducible even
after the fix, QA can mark it as 'Reopen' so that the developer can take appropriate
action.

8) Closed: If the bug is verified by the QA team, the fix is OK, and the problem is
solved, QA can mark the bug as 'Closed'.

9) Rejected/Invalid: Sometimes the developer or team lead can mark a bug as Rejected or
Invalid if the system is working according to specifications and the bug is just due to
some misinterpretation.
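
To make the cycle concrete, here is a sketch of the statuses above and their typical
transitions (simplified; real trackers such as Bugzilla allow more paths):

    from enum import Enum

    class Status(Enum):
        NEW = "New"
        DEFERRED = "Deferred"
        ASSIGNED = "Assigned"
        FIXED = "Resolved/Fixed"
        CNR = "Could not reproduce"
        NEED_INFO = "Need more information"
        REOPEN = "Reopen"
        CLOSED = "Closed"
        REJECTED = "Rejected/Invalid"

    # Typical transitions from the description above (simplified).
    TRANSITIONS = {
        Status.NEW: {Status.ASSIGNED, Status.DEFERRED, Status.REJECTED},
        Status.ASSIGNED: {Status.FIXED, Status.CNR, Status.NEED_INFO, Status.REJECTED},
        Status.NEED_INFO: {Status.ASSIGNED},
        Status.CNR: {Status.ASSIGNED},
        Status.FIXED: {Status.CLOSED, Status.REOPEN},
        Status.REOPEN: {Status.ASSIGNED},
        Status.DEFERRED: {Status.ASSIGNED},
    }

    def can_move(current, target):
        return target in TRANSITIONS.get(current, set())

    assert can_move(Status.FIXED, Status.REOPEN)
    assert not can_move(Status.CLOSED, Status.FIXED)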
