4. Introduction to Testing

What is testing?

Testing is the execution of a system in a real or simulated environment with the intent of finding faults.

Objectives of Testing

* Finding of errors - the primary goal
* Trying to prove that the software does not work, thereby indirectly verifying that the software meets its requirements

What is Software Testing?

Software testing is the process of testing the functionality and correctness of software by running it. It is usually performed for one of two reasons: (1) defect detection, or (2) reliability estimation. Software testing can also be described as the process of executing a computer program and comparing the actual behavior with the expected behavior.

What is the goal of Software Testing?

* Demonstrate that faults are not present
* Find errors
* Ensure that all the functionality is implemented
* Ensure the customer will be able to get their work done

Modes of Testing

* Static: Static analysis does not involve actual program execution; the code is examined and tested without being executed. Example: reviews.
* Dynamic: In dynamic testing, the code is executed. Example: unit testing.

Testing methods
* White box testing: Uses the control structure of the procedural design to derive test cases.
* Black box testing: Derives sets of input conditions that will fully exercise the functional requirements for a program.
* Integration testing: Assembles and tests combined parts of a system.

Verification and Validation

* Verification: Are we doing the job right? The set of activities that ensure that software correctly implements a specific function (i.e., the process of determining whether or not the products of a given phase of the software development cycle fulfill the requirements established during the previous phase). Examples: technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, etc.

* Validation: Are we doing the right job? The set of activities that ensure that the software that has been built is traceable to customer requirements (an attempt to find errors by executing the program in a real environment). Examples: unit testing, system testing and installation testing, etc.

What is a Test Case?

A test case is a documented set of steps/activities that are carried out or executed on the software in order to confirm its functionality/behavior for a certain set of inputs.

What is a software error?


A mismatch between the program and its specification is an error in the program if and only if the specification exists and is correct.

Why we use testing techniques?

Testing techniques are used in order to obtain structured and efficient testing which covers the testing objectives during the different phases in the software life cycle.

5. Testing Types
There are different types of testing available in software testing; common testing terminology is also covered in the definitions below.

Those are:

Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.

Accessibility Testing: Verifying that a product is accessible to people with disabilities (e.g. visual, hearing, or cognitive impairments).

Ad Hoc Testing: Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

Alpha Testing: Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

Automated Testing:
· Testing employing software tools which execute tests without manual
intervention. Can be applied in GUI, performance, API, etc. testing.
· The use of software to control the execution of tests, the comparison of
actual outcomes to predicted outcomes, the setting up of test preconditions,
and other test control and test reporting functions.

Basis Path Testing: A white box test case design technique that uses the
algorithmic flow of the program to design tests.

Baseline: The point at which some deliverable produced during the software
engineering process is put under formal change control.

Beta Testing: Testing when development and testing are essentially completed
and final bugs and problems need to be found before final release. Typically
done by end-users or others, not by programmers or testers.
Black Box Testing: Testing based on an analysis of the specification of a piece
of software without reference to its internal workings. The goal is to test how
well the component conforms to the published requirements for the
component.

Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.

Boundary Value Analysis: BVA is similar to Equivalence Partitioning but focuses on "corner cases": values at and just beyond the limits defined by the specification. For example, if a function accepts all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
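
To make the boundary idea concrete, here is a minimal sketch in Python; the accepts() range check and its limits are assumptions taken from the -100 to +1000 example above, not from any particular product.

# Minimal boundary-value sketch for the hypothetical range check described
# above (valid inputs: -100 through +1000). The accepts() function is an
# assumption made for illustration only.

def accepts(value: int) -> bool:
    """Return True only for values inside the specified range."""
    return -100 <= value <= 1000

# Boundary values: just below, on, and just above each limit.
boundary_cases = {
    -101: False,  # just below the lower boundary -> rejected
    -100: True,   # on the lower boundary -> accepted
    1000: True,   # on the upper boundary -> accepted
    1001: False,  # just above the upper boundary -> rejected
}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"boundary case failed for {value}"
print("all boundary cases passed")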

Branch Testing : Testing in which all branches in the program source code are
tested at least once.
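
As a hedged illustration of branch coverage, the sketch below uses a hypothetical apply_discount() function with a single if/else; two test cases are enough to execute both branches at least once.

# Illustrative sketch of branch coverage: the hypothetical apply_discount()
# function has one if/else, so two test cases are needed to execute both
# branches at least once.

def apply_discount(total: float) -> float:
    """Apply a 10% discount to orders of 100 or more."""
    if total >= 100:          # branch 1: discount applies
        return round(total * 0.9, 2)
    return total              # branch 2: no discount

# One test per branch:
assert apply_discount(200.0) == 180.0   # exercises the 'true' branch
assert apply_discount(50.0) == 50.0     # exercises the 'false' branch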

CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model
for judging the maturity of the software processes of an organization and for
identifying the key practices that are required to increase the maturity of
these processes.
Cause Effect Graph: A graphical representation of inputs and their associated output effects, which can be used to design test cases.

Code Complete: Phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions, analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Compatibility Testing: Testing whether software is compatible with other
elements of a system with which it should operate, e.g. browsers, Operating
Systems, or hardware.

Component: A minimal software item for which a separate specification is available.

Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.
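
As a rough illustration (the function and numbers below are invented for this sketch), cyclomatic complexity for structured code can be estimated as the number of decision points plus one, which also suggests how many independent paths need test cases.

# Rough illustration only: the function below has two decisions (one loop
# condition and one 'if'), so its cyclomatic complexity is 2 + 1 = 3, meaning
# at least three test cases are needed to cover its independent paths.

def count_over_limit(values, limit):
    count = 0
    for v in values:          # decision 1: loop condition
        if v > limit:         # decision 2: comparison
            count += 1
    return count

# Three tests, one per independent path:
assert count_over_limit([], 10) == 0      # loop never entered
assert count_over_limit([5], 10) == 0     # loop entered, 'if' false
assert count_over_limit([15], 10) == 1    # loop entered, 'if' true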

Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
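
A minimal data-driven sketch in Python follows; the add() function and the inline CSV table are placeholders, and in practice the table would live in an external file or spreadsheet consumed by the test.

# Minimal data-driven sketch: inputs and expected outputs live in a table
# (shown inline as CSV to keep the example self-contained) and one generic
# loop runs the same check for every row. The add() function is a stand-in.
import csv
import io

def add(a: int, b: int) -> int:
    return a + b

# In practice this table would be an external file or spreadsheet; each row
# parameterizes one execution of the same test.
test_data = io.StringIO("a,b,expected\n1,2,3\n-5,5,0\n10,15,25\n")

for row in csv.DictReader(test_data):
    result = add(int(row["a"]), int(row["b"]))
    assert result == int(row["expected"]), f"failed for row {row}"
print("all data-driven cases passed")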

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Dynamic Testing: Testing software through executing it.

Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

Endurance Testing: Checks for memory leaks or other problems that may occur
with prolonged execution.

End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
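
The sketch below illustrates the idea with a hypothetical age-based ticket price: the input domain splits into three equivalence classes, and one representative per class stands in for the whole class. The pricing rules are assumptions for illustration only.

# Sketch of equivalence partitioning for a hypothetical age-based ticket
# price: inputs fall into three equivalence classes (child, adult, senior),
# so one representative value per class is enough to represent that class.

def ticket_price(age: int) -> int:
    if age < 18:
        return 5      # child class: 0-17
    if age < 65:
        return 10     # adult class: 18-64
    return 7          # senior class: 65+

# One representative drawn from each equivalence class:
assert ticket_price(9) == 5     # any child age behaves the same
assert ticket_price(40) == 10   # any adult age behaves the same
assert ticket_price(70) == 7    # any senior age behaves the same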

Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.
· Testing the features and operational behavior of a product to ensure they
correspond to its specifications.
· Testing that ignores the internal mechanism of a system or component and
focuses solely on the outputs generated in response to selected inputs and
execution conditions.

Glass Box Testing: A synonym for White Box Testing.

Gorilla Testing: Testing one particular module or piece of functionality heavily.

Gray Box Testing: A combination of Black Box and White Box testing
methodologies: testing a piece of software against its specification but using
some knowledge of its internal workings.

Incremental Integration Testing: Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Installation Testing: Testing of full, partial, or upgrade install/uninstall processes, confirming that the application installs correctly and operates as expected after installation.

Load Testing: Load Tests are end-to-end performance tests under anticipated production load. The primary objective of this test is to determine the response times for various time-critical transactions and business processes and to verify that they are within documented expectations (or Service Level Agreements - SLAs). The test also measures the capability of the application to function correctly under load, by measuring transaction pass/fail/error rates.
This is a major test, requiring substantial input from the business, so that
anticipated activity can be accurately simulated in a test situation. If the
project has a pilot in production then logs from the pilot can be used to
generate ‘usage profiles’ that can be used as part of the testing process, and
can even be used to ‘drive’ large portions of the Load Test.
Load testing must be executed on “today’s” production size database, and
optionally with a “projected” database. If some database tables will be much
larger in some months' time, then load testing should also be conducted against
a projected database. It is important that such tests are repeatable as they
may need to be executed several times in the first year of wide scale
deployment, to ensure that new releases and changes in database size do not
push response times beyond prescribed SLAs.
See Performance Testing also.
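
Real load tests are normally driven by dedicated tools against production-sized data, but the following simplified Python sketch conveys the idea: a pool of concurrent simulated users exercises a stand-in transaction and the observed response times are compared with an assumed SLA (the 0.5-second threshold, the user count and the stub operation are all invented for this example).

# Very simplified load-test sketch: fire a batch of concurrent "transactions"
# at a stand-in business operation and check observed response times against
# an assumed SLA. The SLA value and the stub work are assumptions.
import time
from concurrent.futures import ThreadPoolExecutor

SLA_SECONDS = 0.5
CONCURRENT_USERS = 20

def business_transaction() -> float:
    """Stand-in for a time-critical transaction; returns its elapsed time."""
    start = time.perf_counter()
    time.sleep(0.05)                      # simulated processing work
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    timings = list(pool.map(lambda _: business_transaction(),
                            range(CONCURRENT_USERS)))

failures = [t for t in timings if t > SLA_SECONDS]
print(f"max response: {max(timings):.3f}s, SLA breaches: {len(failures)}")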

Loop Testing: A white box testing technique that exercises program loops.

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per lines of code.

Monkey Testing: Testing a system or an application on the fly, i.e. a few tests here and there to ensure the system or application does not crash.

Mutation testing: A method for determining if a set of test data or test cases is
useful, by deliberately introducing various code changes ('bugs') and retesting
with the original test data/cases to determine if the 'bugs' are detected. Proper
implementation requires large computational resources.
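
The hand-rolled sketch below shows the principle on a toy scale (real mutation tools automate the code changes): the same test data is run against the original function and a deliberately mutated copy, and a useful test set should "kill" the mutant by failing on it.

# Hand-rolled mutation-testing sketch: run the same test data against the
# original function and a deliberately mutated copy; if every test still
# passes on the mutant, the test set is too weak to detect that bug.

def is_even(n: int) -> bool:            # original code
    return n % 2 == 0

def is_even_mutant(n: int) -> bool:     # mutation: '== 0' changed to '!= 0'
    return n % 2 != 0

test_data = [(4, True), (7, False), (0, True)]

def survives(candidate) -> bool:
    """Return True if the candidate passes every test (mutant 'survives')."""
    return all(candidate(n) == expected for n, expected in test_data)

assert survives(is_even)              # original passes, as it should
assert not survives(is_even_mutant)   # mutant is killed -> tests are useful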

Negative Testing: Testing aimed at showing software does not work. Also
known as "test to fail".

N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Performance Tests are tests that determine end to end timing (benchmarking)
of various time critical business processes and transactions, while the system is
under low load, but with a production sized database. This sets ‘best possible’
performance expectation under a given configuration of infrastructure. It also
highlights very early in the testing process if changes need to be made before
load testing should be undertaken.

For example, a customer search may take 15 seconds in a full-sized database if indexes had not been applied correctly, or if an SQL 'hint' was incorporated in a statement that had been optimized with a much smaller database. Such performance testing would highlight such a slow customer search transaction, which could be remediated prior to a full end-to-end load test.

Positive Testing: Testing aimed at showing software works. Also known as "test
to pass".

Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.

Quality Management: That aspect of the overall management function that determines and implements the quality policy.

Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.

Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

Sanity Testing: Brief test of major functional elements of a piece of software to determine if it's basically operational. See also Smoke Testing.

Scalability Testing: Performance testing focused on ensuring the application
under test gracefully handles increases in work load.

Security Testing: Testing which confirms that the program can restrict access
to authorized personnel and that the authorized personnel can access the
functions available to their security level.
Smoke Testing: Typically an initial testing effort to determine if a new
software version is performing well enough to accept it for a major testing
effort. For example, if the new software is crashing systems every 5 minutes,
bogging down systems to a crawl, or corrupting databases, the software may
not be in a 'sane' enough condition to warrant further testing in its current
state.

Sociability (sensitivity) Tests: Sensitivity analysis testing can determine the impact of activities in one system on another related system. Such testing
involves a mathematical approach to determine the impact that one system
will have on another system. For example, web enabling a customer 'order
status' facility may impact on performance of telemarketing screens that
interrogate the same tables in the same database. The issue of web enabling
can be that it is more successful than anticipated and can result in many more
enquiries than originally envisioned, which loads the IT systems with more work
than had been planned.

Static Analysis: Analysis of a program carried out without executing the program.

Static Testing: Analysis of a program carried out without executing the program.

Storage Testing: Testing that verifies the program under test stores data files
in the correct directories and that it reserves sufficient space to prevent
unexpected termination resulting from lack of space. This is external storage as
opposed to internal storage.

Stress Testing: Stress Tests determine the load under which a system fails, and
how it fails. This is in contrast to Load Testing, which attempts to simulate
anticipated load. It is important to know in advance if a ‘stress’ situation will
result in a catastrophic system failure, or if everything just “goes really slow”.
There are various varieties of Stress Tests, including spike, stepped and gradual
ramp-up tests. Catastrophic failures require restarting various infrastructures and contribute to downtime, a stressful environment for support staff and managers, as well as possible financial losses. This test is one of the most
fundamental load and performance tests.

Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.

System Testing: Testing that attempts to discover defects that are properties
of the entire system rather than of its individual components. It’s a black-box
type testing that is based on overall requirements specifications; covers all
combined parts of a system.
Testing:
· The process of exercising software to verify that it satisfies specified
requirements and to detect errors.
· The process of analyzing a software item to detect the differences between
existing and required conditions (that is, bugs), and to evaluate the features of
the software item.
· The process of operating a system or component under specified conditions,
observing or recording the results, and making an evaluation of some aspect of
the system or component.
Test Automation: See Automated Testing.

Test Case:
· Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirements to be tested, test steps, verification steps, prerequisites, outputs, test environment, etc.
· A set of inputs, execution preconditions, and expected outcomes developed
for a particular objective, such as to exercise a particular program path or to
verify compliance with a specific requirement.

Test Environment: The hardware and software environment in which tests will
be run, and any other software with which the software under test interacts
when under test including stubs and test drivers.

Test Harness: A program or test tool used to execute a test. Also known as a
Test Driver.

Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

Test Procedure: A document providing detailed instructions for the execution of one or more test cases.

Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.

Test Suite: A collection of tests used to validate the behavior of a product. The
scope of a Test Suite varies from organization to organization. There may be
several Test Suites for a particular product for example. In most cases however
a Test Suite is a high level concept, grouping together hundreds or thousands of
tests related by what they are intended to test.

Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.

Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.

Usability Testing: Testing the ease with which users can learn and use a
product.

Use Case: The specification of tests that are conducted from the end-user
perspective. Use cases tend to focus on operating software as an end-user
would conduct their day-to-day activities.

User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.

Unit Testing: The most 'micro' scale of testing; to test particular functions or
code modules. Typically done by the programmer and not by testers, as it
requires detailed knowledge of the internal program design and code. Not
always easily done unless the application has a well-designed architecture with
tight code; may require developing test driver modules or test harnesses.
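
A minimal unit-test sketch using Python's built-in unittest module is shown below; the slugify() function is a made-up code module standing in for real application code, which the test would normally import from the application's own package.

# Minimal unit-test sketch using Python's built-in unittest module. The
# slugify() function under test is a hypothetical code module used only to
# make the example self-contained.
import unittest

def slugify(title: str) -> str:
    """Turn a title into a lowercase, dash-separated identifier."""
    return "-".join(title.strip().lower().split())

class SlugifyTests(unittest.TestCase):
    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_surrounding_whitespace_is_ignored(self):
        self.assertEqual(slugify("  Trim Me  "), "trim-me")

if __name__ == "__main__":
    unittest.main()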

Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.

Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.

6. Testing Process

The testing process includes test planning, test design and test execution.

The Structured Approach to Testing is:

Test Planning
* Define what to test
* Identify Functions to be tested
* Test conditions
* Manual or Automated
* Prioritize to identify Most Important Tests
* Record Document References

Test Design
* Define how to test
* Identify Test Specifications
* Build detailed test scripts
* Quick Script generation
* Documents

Test Execution
* Define when to test
* Build test execution schedule
* Record test results

Testing Roles:
As in any organization or organized endeavor there are Roles that must be
fulfilled within any testing organization. The requirement for any given role
depends on the size, complexity, goals, and maturity of the testing
organization. These are roles, so it is quite possible that one person could
fulfill many roles within the testing organization.

Test Lead or Test Manager

The Role of Test Lead / Manager is to effectively lead the testing team. To
fulfill this role the Lead must understand the discipline of testing and how to
effectively implement a testing process while fulfilling the traditional
leadership roles of a manager. What does this mean? The manager must
manage and implement or maintain an effective testing process.
Test Architect

The Role of the Test Architect is to formulate an integrated test architecture that supports the testing process and leverages the available testing infrastructure. To fulfill this role the Test Architect must have a clear understanding of the short-term and long-term goals of the organization, the resources (both hard and soft) available to the organization, and a clear vision of how to most effectively deploy these assets to form an integrated test architecture.

Test Designer or Tester

The Role of the Test Designer / Tester is to: design and document test cases,
execute tests, record test results, document defects, and perform test
coverage analysis. To fulfill this role the designer must be able to apply the
most appropriate testing techniques to test the application as efficiently as
possible while meeting the test organization's testing mandate.

Test Automation Engineer

The Role of the Test Automation Engineer is to create automated test case
scripts that perform the tests as designed by the Test Designer. To fulfill this
role the Test Automation Engineer must develop and maintain an effective test
automation infrastructure using the tools and techniques available to the
testing organization. The Test Automation Engineer must work in concert with
the Test Designer to ensure the appropriate automation solution is being
deployed.

Test Methodologist or Methodology Specialist

The Role of the Test Methodologist is to provide the test organization with
resources on testing methodologies. To fulfill this role the Methodologist works
with Quality Assurance to facilitate continuous quality improvement within the
testing methodology and the testing organization as a whole. To this end the
methodologist: evaluates the test strategy, provides testing frameworks and
templates, and ensures effective implementation of the appropriate testing
techniques.
7. Test Development Life Cycle

The software test development life cycle involves the steps below.

1. Requirements Analysis: Testing should begin in the requirements phase of the software development life cycle.
2. Design Analysis: During the design phase, testers work with developers in determining what aspects of a design are testable and under what parameters those tests will work.
3. Test Planning: Test Strategy, Test Plan(s) creation.
4. Test Development: Test Procedures, Test Scenarios, Test Cases, Test
Scripts to use in testing software.
5. Test Execution: Testers execute the software based on the plans and
tests and report any errors found to the development team.
6. Test Reporting: Once testing is completed, testers generate metrics and
make final reports on their test effort and whether or not the software
tested is ready for release.
7. Retesting the Defects
A. Test Plan Document

What is a test plan?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product.

Elements of test planning

* Establish objectives for each test phase
* Establish schedules for each test activity
* Determine the availability of tools, resources
* Establish the standards and procedures to be used for planning and
conducting the tests and reporting test results
* Set the criteria for test completion as well as for the success of each test

What should a test plan contain?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will
validate the acceptability of a software product. The completed document will
help people outside the test group understand the 'why' and 'how' of product
validation. It should be thorough enough to be useful but not so thorough that
no one outside the test group will read it.

A test plan states what the items to be tested are, at what level they will be
tested, what sequence they are to be tested in, how the test strategy will be
applied to the testing of each item, and describes the test environment.

A test plan should ideally be organization-wide, applicable to all of the organization's software developments.

The objective of each test plan is to provide a plan for verifying, by testing the software, that the software produced fulfils the functional or design statements of the appropriate software specification. In the case of acceptance testing and system testing, this generally means the Functional Specification.
The following are some of the items that might be included in a test plan,
depending on the particular project:

What to consider for the Test Plan:


1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Suspension Criteria and Resumption Requirements
11. Test Deliverables
12. Remaining Test Tasks
13. Environmental Needs
14. Staffing and Training Needs
15. Responsibilities
16. Schedule
17. Planning Risks and Contingencies
18. Approvals
19. Glossary
B. Test Case format:
The test case format contains the fields below.

Test Case Id | Execution Type | Test Case Description | Prerequisite | Testing Steps | Expected Result | Actual Value | Status

Structure of test case:

Formal, written test cases consist of three main parts with subsections, outlined below (a minimal record sketch in code follows this outline):

• Introduction/overview contains general information about the test case.
o Identifier is a unique identifier of the test case for further references, for example, while describing a found defect.
o Test case owner/creator is the name of the tester or test designer who created the test or is responsible for its development.
o Version of the current test case definition.
o Name of the test case should be a human-oriented title which allows one to quickly understand the test case's purpose and scope.
o Identifier of the requirement which is covered by the test case. This could also be the identifier of a use case or functional specification item.
o Purpose contains a short description of the test purpose and what functionality it checks.
o Dependencies
• Test case activity
o Testing environment/configuration contains information about the configuration of hardware or software which must be met while executing the test case.
o Initialization describes actions which must be performed before test case execution starts, for example, opening a particular file.
o Finalization describes actions to be done after the test case is performed; for example, if the test case crashes the database, the tester should restore it before other test cases are run.
o Actions: the steps to be performed, one by one, to complete the test.
o Input data description
• Results
o Expected results contains a description of what the tester should see after all test steps have been completed.
o Actual results contains a brief description of what the tester saw after the test steps have been completed. This is often replaced with a Pass/Fail status. Quite often, if a test case fails, a reference to the defect involved should be listed in this column.
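
As mentioned above, here is a minimal record sketch of that structure expressed as a Python dataclass; the field names mirror the sections listed (identifier, owner, steps, expected and actual results, status) and are illustrative rather than a fixed standard.

# Hypothetical sketch of the test-case record described above; the field
# names are illustrative, not a mandated template.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCaseRecord:
    identifier: str                  # unique ID for later references
    name: str                        # human-oriented title
    owner: str                       # tester or test designer responsible
    requirement_id: str              # requirement / use case covered
    purpose: str                     # what functionality the case checks
    preconditions: List[str] = field(default_factory=list)   # initialization
    steps: List[str] = field(default_factory=list)           # actions
    expected_result: str = ""
    actual_result: Optional[str] = None
    status: Optional[str] = None     # e.g. "Pass" / "Fail"

login_case = TestCaseRecord(
    identifier="TC-001",
    name="Valid user can log in",
    owner="tester-a",
    requirement_id="REQ-12",
    purpose="Checks the happy-path login flow",
    preconditions=["Application is running", "Test user account exists"],
    steps=["Open the login page", "Enter valid credentials", "Click Login"],
    expected_result="User lands on the dashboard page",
)
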
C. Test Case Execution:
After the test cases are executed, the results are recorded and each defect found is assigned a severity level.

The five severity levels are:

1. Critical - The defect results in the failure of the complete software system,
of a subsystem, or of a software unit (program or module) within the system.

2. Major - The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system. There is no way to make the failed component(s) work; however, there are acceptable processing alternatives which will yield the desired result.

3. Average - The defect does not result in a failure, but causes the system to
produce incorrect, incomplete, or inconsistent results, or the defect impairs
the system's usability.

4. Minor - The defect does not cause a failure, does not impair usability, and
the desired processing results are easily obtained by working around the
defect.

5. Exception - The defect is the result of non-conformance to a standard, is related to the aesthetics of the system, or is a request for an enhancement. Defects at this level may be deferred or even ignored.
In addition to the defect severity level defined above, defect priority level can
be used with severity categories to determine the immediacy of repair.
8. Bug Life Cycle:
In the software development process, a bug has a life cycle; it must pass through this life cycle before it can be closed, and a defined life cycle ensures that the process is standardized. The bug attains different states in the life cycle, which can be summarized as follows (a short code sketch of these states appears after the stage descriptions below):

1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected
10. Closed

Description of Various Stages:

1. New: When the bug is posted for the first time, its state will be “NEW”. This
means that the bug is not yet approved.
2. Open: After a tester has posted a bug, the lead of the tester approves that
the bug is genuine and changes the state to “OPEN”.

3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.

4. Test: Once the developer fixes the bug, he has to assign the bug to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to “TEST”. This specifies that the bug has been fixed and released to the testing team.

5. Deferred: A bug changed to the deferred state is expected to be fixed in a future release. There are many reasons for moving a bug to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

6. Rejected: If the developer feels that the bug is not genuine, he rejects the
bug. Then the state of the bug is changed to “REJECTED”.

7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to “DUPLICATE”.

8. Verified: Once the bug is fixed and the status is changed to “TEST”, the
tester tests the bug. If the bug is not present in the software, he approves that
the bug is fixed and changes the status to “VERIFIED”.

9. Reopened: If the bug still exists even after the bug is fixed by the
developer, the tester changes the status to “REOPENED”. The bug traverses the
life cycle once again.

10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels
that the bug no longer exists in the software, he changes the status of the bug
to “CLOSED”. This state means that the bug is fixed, tested and approved.
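
For illustration only, the ten states above can be captured as a small Python Enum; the "happy path" shown is one common route from report to closure described in the stages, not a complete transition model.

# Illustrative sketch only: the ten bug states expressed as a Python Enum,
# plus one common path through the life cycle as described in the text.
from enum import Enum

class BugState(Enum):
    NEW = "New"
    OPEN = "Open"
    ASSIGN = "Assign"
    TEST = "Test"
    VERIFIED = "Verified"
    DEFERRED = "Deferred"
    REOPENED = "Reopened"
    DUPLICATE = "Duplicate"
    REJECTED = "Rejected"
    CLOSED = "Closed"

# One common path from report to closure, as described in the stages above:
happy_path = [BugState.NEW, BugState.OPEN, BugState.ASSIGN,
              BugState.TEST, BugState.VERIFIED, BugState.CLOSED]
print(" -> ".join(state.value for state in happy_path))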

Guidelines on deciding the Severity of Bug:

Indicate the impact each defect has on testing efforts or users and
administrators of the application under test. This information is used by
developers and management as the basis for assigning priority of work on
defects.

A sample guideline for assignment of Priority Levels during the product test
phase includes:
1. Critical / Show Stopper — An item that prevents further testing of the product or function under test can be classified as a Critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a security permission required to access a function under test.

2. Major / High — A defect that does not function as expected/designed, or that causes other functionality to fail to meet requirements, can be classified as a Major bug. A workaround can be provided for such bugs. Examples include inaccurate calculations or the wrong field being updated.

3. Average / Medium — Defects which do not conform to standards and conventions can be classified as Medium bugs. Easy workarounds exist to achieve functionality objectives. Examples include matching visual and text links which lead to different end points.

4. Minor / Low — Cosmetic defects which do not affect the functionality of the system can be classified as Minor bugs.

Defect Severity and Defect Priority

This document defines the defect Severity scale for determining defect
criticality and the associated defect Priority levels to be assigned to errors
found in software. It is a scale which can be easily adapted to other automated
test management tools.
A few simple definitions of Testing Terminology.

Error - The difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.

Fault - An incorrect step, process, or data definition in a computer program.

Debug - To detect, locate, and correct faults in a computer program.

Failure - The inability of a system or component to perform its required functions within specified performance requirements. A failure is the manifestation of a fault during execution.

Testing - The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software items.

Static analysis - The process of evaluating a system or component based on its form, structure, content, or documentation.

Dynamic analysis - The process of evaluating a system or component based on its behavior during execution.

Correctness - The degree to which a system or component is free from faults in its specification, design, and implementation. The degree to which software, documentation, or other items meet specified requirements. The degree to which software, documentation, or other items meet user needs and expectations, whether specified or not.

Verification - The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. Formal proof of program correctness.

Validation - The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

Why does software have bugs?

• miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).
• software complexity - the complexity of current software applications
can be difficult to comprehend for anyone without experience in
modern-day software development. Multi-tiered applications, client-
server and distributed applications, data communications, enormous
relational databases, and sheer size of applications have all contributed
to the exponential growth in software/system complexity.
• programming errors - programmers, like anyone else, can make
mistakes.

• changing requirements (whether documented or undocumented) - the end-user may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors.

What kinds of testing should be considered?

• Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
• White box testing - based on knowledge of the internal logic of an
application's code. Tests are based on coverage of code statements,
branches, paths, conditions.
• unit testing - the most 'micro' scale of testing; to test particular
functions or code modules. Typically done by the programmer and not by
testers, as it requires detailed knowledge of the internal program design
and code. Not always easily done unless the application has a well-
designed architecture with tight code; may require developing test
driver modules or test harnesses.
• incremental integration testing - continuous testing of an application as
new functionality is added; requires that various aspects of an
application's functionality be independent enough to work separately
before all parts of the program are completed, or that test drivers be
developed as needed; done by programmers or by testers.
• integration testing - testing of combined parts of an application to
determine if they function together correctly. The 'parts' can be code
modules, individual applications, client and server applications on a
network, etc. This type of testing is especially relevant to client/server
and distributed systems.
• functional testing - black-box type testing geared to functional
requirements of an application; this type of testing should be done by
testers. This doesn't mean that the programmers shouldn't check that
their code works before releasing it (which of course applies to any
stage of testing.)
• system testing - black-box type testing that is based on overall
requirements specifications; covers all combined parts of a system.
• end-to-end testing - similar to system testing; the 'macro' end of the test
scale; involves testing of a complete application environment in a
situation that mimics real-world use, such as interacting with a
database, using network communications, or interacting with other
hardware, applications, or systems if appropriate.
• sanity testing or smoke testing - typically an initial testing effort to
determine if a new software version is performing well enough to accept
it for a major testing effort. For example, if the new software is
crashing systems every 5 minutes, bogging down systems to a crawl, or
corrupting databases, the software may not be in a 'sane' enough
condition to warrant further testing in its current state.
• regression testing - re-testing after fixes or modifications of the
software or its environment. It can be difficult to determine how much
re-testing is needed, especially near the end of the development cycle.
Automated testing tools can be especially useful for this type of testing.
• acceptance testing - final testing based on specifications of the end-user
or customer, or based on use by end-users/customers over some limited
period of time.
• load testing - testing an application under heavy loads, such as testing of
a web site under a range of loads to determine at what point the
system's response time degrades or fails.
• stress testing - term often used interchangeably with 'load' and
'performance' testing. Also used to describe such tests as system
functional testing while under unusually heavy loads, heavy repetition of
certain actions or inputs, input of large numerical values, large complex
queries to a database system, etc.
• performance testing - term often used interchangeably with 'stress' and
'load' testing. Ideally 'performance' testing (and any other 'type' of
testing) is defined in requirements documentation or QA or Test Plans.
• usability testing - testing for 'user-friendliness'. Clearly this is subjective,
and will depend on the targeted end-user or customer. User interviews,
surveys, video recording of user sessions, and other techniques can be
used. Programmers and testers are usually not appropriate as usability
testers.
• install/uninstall testing - testing of full, partial, or upgrade
install/uninstall processes.
• recovery testing - testing how well a system recovers from crashes,
hardware failures, or other catastrophic problems.
• failover testing - typically used interchangeably with 'recovery testing'
• security testing - testing how well the system protects against
unauthorized internal or external access, willful damage, etc; may
require sophisticated testing techniques.
• compatibility testing - testing how well software performs in a
particular hardware/software/operating system/network/etc.
environment.
• exploratory testing - often taken to mean a creative, informal software
test that is not based on formal test plans or test cases; testers may be
learning the software as they test it.
• ad-hoc testing - similar to exploratory testing, but often taken to mean
that the testers have significant understanding of the software before
testing it.
• context-driven testing - testing driven by an understanding of the
environment, culture, and intended use of software. For example, the
testing approach for life-critical medical equipment software would be
completely different than that for a low-cost computer game.
• user acceptance testing - determining if software is satisfactory to an
end-user or customer.
• comparison testing - comparing software weaknesses and strengths to
competing products.
• alpha testing - testing of an application when development is nearing
completion; minor design changes may still be made as a result of such
testing. Typically done by end-users or others, not by programmers or
testers.
• beta testing - testing when development and testing are essentially
completed and final bugs and problems need to be found before final
release. Typically done by end-users or others, not by programmers or
testers.
• mutation testing - a method for determining if a set of test data or test
cases is useful, by deliberately introducing various code changes ('bugs')
and retesting with the original test data/cases to determine if the 'bugs'
are detected. Proper implementation requires large computational
resources.

What should be done after a bug is found?


The bug needs to be communicated and assigned to developers that can fix it.
After the problem is resolved, fixes should be re-tested, and determinations
made regarding requirements for regression testing to check that fixes didn't
create problems elsewhere. If a problem-tracking system is in place, it should
encapsulate these processes. A variety of commercial problem-
tracking/management software tools are available (see the 'Tools' section for
web resources with listings of such tools). The following are items to consider in the tracking process (a minimal defect-record sketch in code follows this list):

• Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug
occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a
test case or if the developer doesn't have easy access to the test
case/test script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool
logs that would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is
common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
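
As noted before the list, here is a minimal sketch of a defect record carrying a subset of these fields as a Python dataclass; the field names and sample values are invented for illustration and are not the schema of any particular bug-tracking tool.

# Hypothetical sketch of a defect record carrying some of the core tracking
# fields listed above; names and sample values are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BugReport:
    bug_id: str                       # bug identifier
    status: str                       # e.g. "New", "Released for Retest"
    application: str                  # application name and version
    summary: str                      # one-line bug description
    description: str                  # full bug description
    steps_to_reproduce: List[str] = field(default_factory=list)
    environment: str = ""             # platform / hardware specifics
    severity: int = 3                 # e.g. 1 (critical) to 5 (low)
    reproducible: bool = True
    reported_by: str = ""
    assigned_to: Optional[str] = None
    fix_description: Optional[str] = None
    retest_result: Optional[str] = None

report = BugReport(
    bug_id="BUG-0042",
    status="New",
    application="OrderEntry 2.3.1",
    summary="Discount not applied to orders of exactly 100",
    description="Orders totalling exactly 100 are charged full price.",
    steps_to_reproduce=["Create an order totalling 100", "Proceed to checkout"],
    severity=2,
    reported_by="tester-a",
)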

A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting
is needed, developers need to know when bugs are found and how to get the
needed information, and reporting/summary capabilities are needed for
managers.
How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so
complex, and run in such an interdependent environment, that complete
testing can never be done. Common factors in deciding when to stop are:

• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends

What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?

• SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
• CMM = 'Capability Maturity Model', now called the CMMI ('Capability
Maturity Model Integration'), developed by the SEI. It's a model of 5
levels of process 'maturity' that determine effectiveness in delivering
quality software. It is geared to large organizations such as large U.S.
Defense Department contractors. However, many of the QA processes
involved are appropriate to any organization, and if reasonably applied
can be helpful. Organizations can receive CMMI ratings by undergoing
assessments by qualified auditors.

Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.

Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.

Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

• ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality
systems that are assessed by outside auditors, and it applies to many
kinds of production and manufacturing organizations, not just software.
It covers documentation, design, development, production, testing,
installation, servicing, and other processes. The full set of standards
consists of: (a)Q9001-2000 - Quality Management Systems:
Requirements; (b)Q9000-2000 - Quality Management Systems:
Fundamentals and Vocabulary; (c)Q9004-2000 - Quality Management
Systems: Guidelines for Performance Improvements. To be ISO 9001
certified, a third-party auditor assesses an organization, and
certification is typically good for about 3 years, after which a complete
reassessment is required. Note that ISO certification does not necessarily
indicate quality products - it indicates only that documented processes
are followed. Also see http://www.iso.org/ for the latest information. In
the U.S. the standards can be purchased via the ASQ web site at
http://e-standards.asq.org/
ISO 9126 defines six high level quality characteristics that can be used in
software evaluation. It includes functionality, reliability, usability,
efficiency, maintainability, and portability.
• IEEE = 'Institute of Electrical and Electronics Engineers' - among other
things, creates standards such as 'IEEE Standard for Software Test
Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software
Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
• ANSI = 'American National Standards Institute', the primary industrial
standards body in the U.S.; publishes some software-related standards in
conjunction with the IEEE and ASQ (American Society for Quality).

Other software development/IT management process assessment methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.
