by Karthik Ramanathan
1.1. Application
Start Application by Double Clicking on its ICON. The Loading message should show the application name,
version number, and a bigger pictorial representation of the icon (a 'splash' screen).
No Login is necessary
The main window of the application should have the same caption as the caption of the icon in Program Manager.
Closing the application should result in an "Are you Sure" message box
Attempt to start application Twice
This should not be allowed - you should be returned to main Window
Try to start the application twice as it is loading.
1.2. For Each Window in the Application
On each window, if the application is busy, then the hour glass should be displayed. If there is no hour glass
(e.g. alpha access enquiries) then some enquiry in progress message should be displayed.
All screens should have a Help button, F1 should work doing the same.
If the screen has a Control menu, then use all ungreyed options. (see below)
1.3. Text Boxes
Move the Mouse Cursor over all Enterable Text Boxes. Cursor should change from arrow to Insert Bar.
If it doesn't then the text in the box should be grey or non-updateable. Refer to previous page.
Enter text into Box
Try to overflow the text by typing too many characters - this should be stopped. Check the field width with capital Ws.
Enter invalid characters - Letters in amount fields, try strange characters like + , - * etc. in All fields.
SHIFT and Arrow should Select Characters. Selection should also be possible with mouse. Double Click should
select all text in box.
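The text-box checks above lend themselves to a small automated harness. A minimal sketch in Python (the `validate_text_field` helper and its limits are invented for illustration, not part of any named toolkit):

```python
def validate_text_field(value, max_length, allowed=None):
    """Reject input that overflows the field width or contains characters
    the field should not accept (e.g. letters in an amount field)."""
    if len(value) > max_length:
        return False  # overflow: typing too many characters must be stopped
    if allowed is not None and any(ch not in allowed for ch in value):
        return False  # invalid characters such as + - * in a numeric field
    return True

# Field-width check with capital Ws (the widest common character):
assert validate_text_field("W" * 10, max_length=10)
assert not validate_text_field("W" * 11, max_length=10)
# Amount field: only digits and a decimal point are acceptable.
assert not validate_text_field("12+3", max_length=10, allowed=set("0123456789."))
```

The same two checks (width in capital Ws, rejection of stray operators) apply to every enterable text box on the screen.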
1.4. Option (Radio Buttons)
Left and Right arrows should move 'ON' Selection. So should Up and Down. Select with mouse by clicking.
1.5. Check Boxes
Clicking with the mouse on the box, or on the text should SET/UNSET the box. SPACE should do the same.
1.6. Command Buttons
If a Command Button leads to another Screen, and if the user can enter or change details on the other screen then
the Text on the button should be followed by three dots.
All Buttons except for OK and Cancel should have a letter access key. This is indicated by a letter underlined
in the button text. The button should be activated by pressing ALT+Letter. Make sure there is no duplication.
Click each button once with the mouse - This should activate
Tab to each button - Press SPACE - This should activate
Tab to each button - Press RETURN - This should activate
The above are VERY IMPORTANT, and should be done for EVERY command Button.
Tab to another type of control (not a command button). One button on the screen should be the default (indicated by
a thick black border). Pressing Return in ANY non-command-button control should activate it.
If there is a Cancel Button on the screen, then pressing <Esc> should activate it.
If pressing the Command button results in uncorrectable data e.g. closing an action step, there should be a message
phrased positively with Yes/No answers where Yes results in the completion of the action.
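The "no duplication" rule for ALT+letter access keys can be checked mechanically. A sketch in Python, assuming button labels mark the access letter with `&` as many Windows toolkits do (`find_duplicate_access_keys` is a name invented here):

```python
def find_duplicate_access_keys(button_labels):
    """Return the access letters claimed by more than one button."""
    seen, duplicates = set(), set()
    for label in button_labels:
        pos = label.find("&")
        if pos == -1 or pos + 1 >= len(label):
            continue  # no access letter defined (e.g. OK, Cancel)
        letter = label[pos + 1].lower()
        if letter in seen:
            duplicates.add(letter)
        seen.add(letter)
    return duplicates

# '&Save' and '&Search' both claim ALT+S, which the checklist forbids.
print(find_duplicate_access_keys(["&Save", "&Search", "&Print", "OK", "Cancel"]))
# prints {'s'}
```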
1.7. Drop Down List Boxes
Pressing the Arrow should give list of options. This List may be scrollable. You should not be able to type text
in the box.
Pressing a letter should bring you to the first item in the list that starts with that letter. Pressing 'Ctrl - F4'
should open/drop down the list box.
Spacing should be compatible with the existing Windows spacing (Word etc.). Items should be in alphabetical
order with the exception of blank/none, which is at the top or the bottom of the list box.
Dropping down with an item selected should display the list with the selected item at the top.
Make sure only one blank entry appears; there shouldn't be a blank line at the bottom.
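The alphabetical-order rule above is easy to verify in code. A Python sketch (the helper name and the `(none)` spellings are assumptions for illustration):

```python
def dropdown_order_ok(items):
    """Items must be alphabetical; one blank/none entry may sit at the
    top or the bottom of the list."""
    core = list(items)
    specials = ("", "(none)", "None")
    if core and core[0] in specials:
        core = core[1:]
    elif core and core[-1] in specials:
        core = core[:-1]
    return core == sorted(core, key=str.lower)

assert dropdown_order_ok(["(none)", "Apple", "banana", "Cherry"])
assert dropdown_order_ok(["Apple", "banana", "Cherry", ""])
assert not dropdown_order_ok(["banana", "Apple"])
```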
1.8. Combo Boxes
Should allow text to be entered. Clicking the Arrow should allow the user to choose from the list.
1.9. List Boxes
Should allow a single selection to be chosen, by clicking with the mouse, or using the Up and Down Arrow keys.
Pressing a letter should take you to the first item in the list starting with that letter.
If there is a 'View' or 'Open' button beside the list box, then double clicking on a line in the List Box should act in the same way as selecting
an item in the list box, then clicking the command button.
Force the scroll bar to appear, make sure all the data can be seen in the box
Section 2 - Screen Validation Checklist
1. Does a failure of validation on every field cause a sensible user error message?
2. Is the user required to fix entries which have failed validation tests?
3. Have any fields got multiple validation rules and if so are all rules being applied?
4. If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field) is the invalid entry identified and
highlighted correctly with an error message?
5. Is validation consistently applied at screen level unless specifically required at field level?
6. For all numeric fields check whether negative numbers can and should be able to be entered.
7. For all numeric fields check the minimum and maximum values and also some mid-range values allowable?
8. For all character/alphanumeric fields check the field to ensure that there is a character limit specified and that this limit is exactly
correct for the specified database size?
9. Do all mandatory fields require user input?
10. If any of the database columns don't allow null values then the corresponding screen fields must be mandatory. (If any field which
initially was mandatory has become optional then check whether null values are allowed in this field.)
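Check 10 above pairs each NOT NULL database column with a mandatory screen field. That consistency check can be scripted; a minimal sketch with hypothetical schema and screen definitions (all names invented for illustration):

```python
# Hypothetical schema and screen definitions, for illustration only.
db_columns = {"name":  {"nullable": False},
              "phone": {"nullable": True},
              "dept":  {"nullable": False}}
screen_fields = {"name":  {"mandatory": True},
                 "phone": {"mandatory": False},
                 "dept":  {"mandatory": False}}  # mismatch: dept is NOT NULL

def non_null_columns_missing_mandatory_fields(columns, fields):
    """Every NOT NULL column must map to a mandatory screen field."""
    return [name for name, col in columns.items()
            if not col["nullable"] and not fields[name]["mandatory"]]

print(non_null_columns_missing_mandatory_fields(db_columns, screen_fields))
# prints ['dept']
```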
1. Is the data saved when the window is closed by double clicking on the close box?
2. Check the maximum field lengths to ensure that there are no truncated characters?
3. Where the database requires a value (other than null) then this should be defaulted into fields. The user must either enter an
alternative valid value or leave the default value intact.
4. Check maximum and minimum field values for numeric fields?
5. If numeric fields accept negative values can these be stored correctly on the database and does it make sense for the field to
accept negative numbers?
6. If a set of radio buttons represent a fixed set of values such as A, B and C then what happens if a blank value is retrieved from
the database? (In some situations rows can be created on the database by other functions which are not screen based and thus
the required initial values can be incorrect.)
7. If a particular set of data is saved to the database check that each value gets saved fully to the database. i.e. Beware of
truncation (of strings) and rounding of numeric values.
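Item 7's truncation and rounding defects show up as a failed round trip: save a value, read it back, and compare. A sketch (the save/load callables here simulate a column that truncates, purely for illustration):

```python
def save_roundtrip_ok(value, save, load):
    """Write a value through the save path and read it back; truncated
    strings and rounded numerics both show up as a mismatch."""
    save(value)
    return load() == value

# Simulated column that silently truncates to 5 characters (the defect to catch).
store = {}
save = lambda v: store.update(row=str(v)[:5])
load = lambda: store["row"]
print(save_roundtrip_ok("abcdefgh", save, load))  # prints False: truncated
print(save_roundtrip_ok("abc", save, load))       # prints True
```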
1. Are the screen and field colours adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-only mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.
• Assure that leap years are validated correctly & do not cause errors/miscalculations
• Assure that month code 00 and 13 are validated correctly & do not cause errors/miscalculations
• Assure that 00 and 13 are reported as errors
• Assure that day values 00 and 32 are validated correctly & do not cause errors/miscalculations
• Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/ miscalculations
• Assure that Feb. 30 is reported as an error
• Assure that century change is validated correctly & does not cause errors/ miscalculations
• Assure that out of cycle dates are validated correctly & do not cause errors/miscalculations
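The date checks above reduce to a few range rules plus the leap-year computation. A Python sketch (the function names are invented for illustration):

```python
def is_leap_year(year):
    # Divisible by 4, but not by 100, unless also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def is_valid_date(year, month, day):
    if not 1 <= month <= 12:  # month codes 00 and 13 are errors
        return False
    limit = DAYS_IN_MONTH[month - 1]
    if month == 2 and is_leap_year(year):
        limit = 29
    return 1 <= day <= limit  # day values 00 and 32 are errors

assert not is_valid_date(1996, 0, 1) and not is_valid_date(1996, 13, 1)
assert is_valid_date(1996, 2, 29)      # leap year
assert not is_valid_date(1900, 2, 29)  # century change: 1900 is not a leap year
assert not is_valid_date(1996, 2, 30)  # Feb. 30 reported as an error
```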
Add
View
Change
Delete
Continue - (i.e. continue saving changes or additions)
Add
View
Change
Delete
Cancel - (i.e. abandon changes or additions)
Key Function
TAB Move to next active/editable field (adding SHIFT reverses the order of movement).
CTRL + TAB Move to next open Document or Child window.
ALT + TAB Switch to previously used application (holding down the ALT key displays all open applications).
Key Function
CTRL + Z Undo
CTRL + X Cut
CTRL + C Copy
CTRL + V Paste
CTRL + N New
CTRL + O Open
CTRL + P Print
CTRL + S Save
CTRL + B Bold*
CTRL + I Italic*
CTRL + U Underline*
* These shortcuts are suggested for text formatting applications, in the context for
which they make sense. Applications may use other modifiers for these operations.
The following edits, questions, and checks should be considered for all numeric fields.
• Signed or Unsigned
• Decimal Value
• Word Boundaries
• Italics
• Absent 123.
• Roman numerals
• Automatic Recovery?
• Reset Value
Reasonableness Checks .
• Numeric
• Alternative Source
Encrypted storage .
Validation • Table
• Computation
• Other report
• Lookup Key
Naming Conventions .
Compiler Requirements .
Note the edits that are performed by the programming language, tests that should be handled during unit testing, and checks that should be
done via integration or system testing.
Other issues:
1. Will boundaries and limits change over time?
2. Are they influenced by something else?
3. Will field accept operators? +, -, /, *, !, **, ^, %
4. Will the value change format?
64 bit to 16 bit
Character to numeric
Display to packed
Display to scientific notation
Display to words
5. Will value move across platforms?
6. Why is the field being treated as numeric?
7. Will voice recognition be necessary?
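Several of the questions above (limits, signs, boundaries) are served by classic boundary-value cases. A sketch of generating them for a numeric field (the helper name is invented here):

```python
def boundary_values(minimum, maximum):
    """Boundary cases for a numeric field: each limit, one step inside it,
    one step outside it, and a mid-range value."""
    mid = (minimum + maximum) // 2
    valid = [minimum, minimum + 1, mid, maximum - 1, maximum]
    invalid = [minimum - 1, maximum + 1]
    return valid, invalid

valid, invalid = boundary_values(0, 100)
print(valid)    # [0, 1, 50, 99, 100]
print(invalid)  # [-1, 101]
```

For an unsigned field, `minimum - 1` in the invalid list is also the first negative value the field must reject.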
The following edits, questions, and checks should be considered for all date fields. Be aware that many programming languages combine
date and time into one data type.
Edit / Question Example
Required entry .
Century display 1850, 1999, 2001
Implied century Display last two digits of year (96,02). All dates are
assumed to be between 1950 and 2049.
Date display format • mm-dd-yy (12/01/96)
• mm-dd-ccyy (12/01/1996)
• dd-mm-yy (01/12/96)
• dd-mm-ccyy (01/12/1996)
• dd-mmm-yy (01-Jan-96)
• dd-mmm-ccyy (01-Jan-1996)
• dd-mm (day and month only)
• Complete date (December 1, 1996)
• Date, abbreviated month (Dec 1, 1996)
• Day included in date (Monday, November 7, 1996)
• yymmdd (960105)
• ccyymmdd (20011231)
• No year, text month and day (May 30th)
• Financial calculator (12.0196)
• Intensity
Leap year computations Any year number evenly divisible by 4, but not by 100,
unless it is also divisible by 400.
• Computed date
Authorization / Security required • Add
• Modify
• Delete
• View
Formats • Entry
• Storage (date, or relative day number)
• Print
• Display
Null date • 00/00/00 zeros
• bb/bb/bb spaces
Is the programming responsible for managing dates? .
Are autofill features utilized? .
Will the date field be used again elsewhere? .
Is this a standard date entry routine that is already tested? .
Are there other mechanisms to date stamp fields or records? .
Is the position of the date important? • On screen
• In a report
Are other events triggered by this date? .
Permissible dates • Holiday
• local, regional, national, international
• Weekend
• 12/01/??
Font • Acceptable fonts
• Largest font size that will display properly
• Default font
Correction Can erroneous dates be automatically corrected?
Error messages • Content
• Placement
• When displayed
Other issues:
1. Can invalid dates be passed to this routine? Should they be accepted?
2. Is there a standard date entry routine in the library?
3. Can new date formats be easily added and edited?
4. What is the source of the date: input documents, calendar on the wall, or field on another document?
5. Are there other mechanisms to change dates outside of this program?
6. Is this a date and time field?
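A date-entry routine usually accepts several of the display formats listed above. A minimal parsing sketch in Python using `datetime.strptime` (the chosen format subset is an assumption for illustration):

```python
from datetime import datetime

# A few of the display formats from the table above, as strptime patterns.
FORMATS = ["%m-%d-%y", "%m-%d-%Y", "%d-%b-%y", "%Y%m%d"]

def parse_date(text):
    """Try each supported format in turn; return None for invalid dates."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    return None

assert parse_date("12-01-96") is not None   # mm-dd-yy
assert parse_date("01-Jan-96") is not None  # dd-mmm-yy
assert parse_date("20011231") is not None   # ccyymmdd
assert parse_date("13-32-96") is None       # invalid month and day
```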
Special Characters - Special characters may not be used on some Windows entry screens; there may also be some conflicts with converting
data or using data from other systems.
Printer Configuration - Although Windows is designed to handle the printer setup for most applications, there are formatting differences
between printers and printer types. LaserJet printers do not behave the same as inkjets, nor do 300, 600, or 1200 DPI laser printers behave
the same across platforms.
Date Formats - The varying date formats sometimes cause troubles when they are being displayed in windows entry screens. This situation
could occur when programs are designed to handle a YY/MM/DD format and the date format being used is YYYY/MMM/DD.
Screen Savers - Some screen savers such as After Dark are memory or resource ‘hogs’ and have been known to cause troubles when
running other applications.
Speed Keys - Verify that there are no conflicting speed keys on the various screens. This is especially important on screens where the
buttons change.
Virus Protection Software - Some virus protection software can be configured too strictly. This may cause applications to run slowly or
incorrectly.
Disk Compression Tools - Some disk compression software may cause our applications to run slowly or incorrectly.
Multiple Open Windows - How does the system handle having multiple open windows? Are there any resource errors?
Test Multiple Environments - Programs need to be tested under multiple configurations. The configurations seem to cause various results.
Test Multiple Operating Systems - Programs running under Win 95, Win NT, and Windows 3.11 do not behave the same in all
environments.
Corrupted DLL’s - Corrupted DLL’s will sometimes cause applications not to execute or, more damagingly, to run sporadically.
Incorrect DLL Versions - Incorrect versions of DLL’s will sometimes cause our applications not to execute or, more damagingly, to run sporadically.
Missing DLL’s - Missing DLL’s will usually cause our applications not to execute.
Standard Program Look & Feel - The basic windows look & feel should be consistent across all windows and the entire application.
Windows buttons, windows and controls should follow the same standards for sizes.
Tab Order - When pressing the TAB key to change focus from object to object, the progression should be logical.
Completion of Edits - The program should force the completion of edits for any screen before users have a chance to exit the program.
Saving Screen Sizes - Does the user have an opportunity to save the current screen sizes and position?
Operational Speed - Make sure that the system operates at a functional speed: databases, retrieval, and external references.
Testing Under Loaded Environments - Test system functions while running various "resource hog" programs (MS Word,
MS Excel, WP, etc.).
Resource Monitors - Resource monitors help track Windows resources which when expended will cause GPF’s.
Video Settings - Programmers tend to develop at 800 x 600 or higher resolution; when these programs run at the default 640 x 480 they
tend to overfill the screen. Make sure the application is designed for the resolution used by customers.
Clicking on Objects Multiple Times - Will you get multiple instances of the same object or window with multiple clicks?
Saving Column Orders - Can the user save the orders of columns of the display windows?
Displaying Messages saying that the system is processing - When doing system processing do we display some information stating what
the system is doing?
Clicking on Other Objects While the System is Processing - Is processing interrupted? Do unexpected events occur after processing
finishes?
Large Fonts / Small Fonts - When switching between Windows font sizes, mixed results occur when a program is designed in one mode and
executed in another.
Maximizing / Minimizing all windows - Do the actual screen elements resize? Do we use all of the available screen space when the screen
is maximized?
Setup Program - Does your setup program function correctly across multiple OS’s? Does the program prompt the user before overwriting
existing files?
Consistency in Operation - Consistent behavior of the program in all screens and the overall application.
Multiple Copies of the same Window - Can the program handle multiple copies of the same window? Can all of these windows be edited
concurrently?
Confirmation of Deletes - All deletes should require confirmations of the process before execution.
Selecting alternative language options - Will your program handle the use of other languages (FRENCH, SPANISH, ITALIAN, etc.)
Build the Plan
1. Analyze the product.
• What to Analyze
• Users (who they are and what they do)
• Operations (what it’s used for)
• Product Structure (code, files, etc.)
• Product Functions (what it does)
• Product Data (input, output, states, etc.)
• Platforms (external hardware and software)
• Ways to Analyze
• Perform product/prototype walkthrough.
• Review product and project documentation.
• Interview designers and users.
• Compare w/similar products.
• Possible Work Products
• Product coverage outline
• Annotated specifications
• Product Issue list
• Status Check
• Do designers approve of the product coverage outline?
• Do designers think you understand the product?
• Can you visualize the product and predict behavior?
• Are you able to produce test data (input and results)?
• Can you configure and operate the product?
• Do you understand how the product will be used?
• Are you aware of gaps or inconsistencies in the design?
• Do you have remaining questions regarding the product?
2. Analyze product risk.
• What to Analyze
• Threats
• Product vulnerabilities
• Failure modes
• Victim impact
• Ways to Analyze
• Review requirements and specifications.
• Review problem occurrences.
• Interview designers and users.
• Review product against risk heuristics and quality criteria categories.
• Identify general fault/failure patterns.
• Possible Work Products
• Component risk matrices
• Failure mode outline
• Status Check
• Do the designers and users concur with the risk analysis?
• Will you be able to detect all significant kinds of problems, should they occur during testing?
• Do you know where to focus testing effort for maximum effectiveness?
• Can the designers do anything to make important problems easier to detect, or less likely to occur?
• How will you discover if your risk analysis is accurate?
3. Plan the test strategy.
• General Strategies
• Domain testing (including boundaries)
• User testing
• Stress testing
• Regression testing
• Sequence testing
• State testing
• Specification-based testing
• Structural testing (e.g. unit testing)
• Ways to Plan
• Match strategies to risks and product areas.
• Visualize specific and practical strategies.
• Look for automation opportunities.
• Prototype test probes and harnesses.
• Don’t overplan. Let testers use their brains.
• Possible Work Products
• Itemized statement of each test strategy chosen and how it will be applied.
• Risk/task matrix.
• List of issues or challenges inherent in the chosen strategies.
• Advisory of poorly covered parts of the product.
• Test cases (if required)
• Status Check
• Do designers concur with the test strategy?
• Has the strategy made use of every available resource and helper?
• Is the test strategy too generic? Could it just as easily apply to any product?
• Will the strategy reveal all important problems?
4. Plan logistics.
• Logistical Areas
• Test effort estimation and scheduling
• Testability engineering
• Test team staffing (right skills)
• Tester training and supervision
• Tester task assignments
• Product information gathering and management
• Project meetings, communication, and coordination
• Relations with all other project functions, including development
• Test platform acquisition and configuration
• Possible Work Products
• Issues list
• Project risk analysis
• Responsibility matrix
• Test schedule
• Agreements and protocols
• Test tools and automation
• Stubbing and simulation needs
• Test suite management and maintenance
• Build and transmittal protocol
• Test cycle administration
• Problem reporting system and protocol
• Test status reporting protocol
• Code freeze and incremental testing
• Pressure management in end game
• Sign-off protocol
• Evaluation of test effectiveness
• Status Check
• Do the logistics of the project support the test strategy?
• Are there any problems that block testing?
• Are the logistics and strategy adaptable in the face of foreseeable problems?
• Can you start testing now and sort out the rest of the issues later?
5. Share the plan.
• Ways to Share
• Engage designers and stakeholders in the test planning process.
• Actively solicit opinions about the test plan.
• Do everything possible to help the developers succeed.
• Help the developers understand how what they do impacts testing.
• Talk to technical writers and technical support people about sharing quality information.
• Get designers and developers to review and approve all reference materials.
• Record and reinforce agreements.
• Get people to review the plan in pieces.
• Improve reviewability by minimizing unnecessary text in test plan documents.
• Goals
• Common understanding of the test process.
• Common commitment to the test process.
• Reasonable participation in the test process.
• Management has reasonable expectations about the test process.
• Status Check
• Is the project team paying attention to the test plan?
• Does the project team, especially first line management, understand the role of the test team?
• Does the project team feel that the test team has the best interests of the project at heart?
• Is there an adversarial or constructive relationship between the test team and the rest of the project?
• Does any member of the project team feel that the testers are “off on a tangent” rather than focused on important testing tasks?
Test Plan
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The
process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough
enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be
included in a test plan, depending on the particular project:
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
• Test environment validity analysis - differences between the test and production systems and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help
describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.
1. Introduction
• Purpose
• Scope
2. Applicability
• Applicable Documents
• Documents
3. Program Management and Planning
• The SQA Plan
• Organization
• Tasks
4. Software Training
• SQA Personnel
• Software Developer Training Certification
5. SQA Program Requirements
• Program Resources Allocation Monitoring
• SQA Program Audits
1. Scheduled Audits
2. Unscheduled Audits
3. Audits of the SQA Organization
4. Audit Reports
• SQA Records
• SQA Status Reports
• Software Documentation
• Requirements Traceability
• Software Development Process
• Project reviews
1. Formal Reviews
2. Informal Reviews
• Tools and Techniques
• Software Configuration Management
• Release Procedures
• Change Control
• Problem Reporting
• Software Testing
1. Unit Test
2. Integration Test
3. System Testing
4. Validation Testing
1. BACKGROUND
2. INTRODUCTION
3. ASSUMPTIONS
4. TEST ITEMS
List each of the items (programs) to be tested.
5. FEATURES TO BE TESTED
List each of the features (functions or requirements) which will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED
Explicitly list each feature, function, or requirement which won't be tested and why not.
7. APPROACH
Describe the data flows and test philosophy.
Simulation or Live execution, Etc.
8. ITEM PASS/FAIL CRITERIA
Blanket statement
Itemized list of expected output and tolerances
9. SUSPENSION/RESUMPTION CRITERIA
Must the test run from start to completion?
Under what circumstances may it be resumed in the middle?
Establish check-points in long tests.
10. TEST DELIVERABLES
What, besides software, will be delivered?
Test report
Test software
11. TESTING TASKS
Functional tasks (e.g., equipment set up)
Administrative tasks
12. ENVIRONMENTAL NEEDS
Security clearance
Office space & equipment
Hardware/software requirements
13. RESPONSIBILITIES
Who does the tasks in Section 11?
What does the user do?
14. STAFFING & TRAINING
15. SCHEDULE
16. RESOURCES
17. RISKS & CONTINGENCIES
18. APPROVALS
Test plan
Define the testing approach and resources and schedule the testing activities.
Business requirement
Specify the requirements of testing and identify the specific features to be tested by the design.
Test case
Define a test case identified by a test-design specification.
|---------------------------------------------------------|
|                        Test Case                        |
|---------------------------------------------------------|
| Test Case ID:                                           |
|                                                         |
| Test Description:                                       |
|                                                         |
| Revision History:                                       |
|                                                         |
| Date Created:                                           |
|                                                         |
| Function to be tested:                                  |
|---------------------------------------------------------|
| Environment:                                            |
|                                                         |
| Test Setup:                                             |
|                                                         |
| Test Execution:                                         |
|                                                         |
| 1.                                                      |
|                                                         |
| 2.                                                      |
|                                                         |
| 3.                                                      |
|---------------------------------------------------------|
|                                                         |
| Expected Results:                                       |
|                                                         |
| Actual Results:                                         |
|---------------------------------------------------------|
| Completed:                                              |
|                                                         |
| Signed Out:                                             |
|---------------------------------------------------------|
Test Case Form The test case form is used to track all the test cases. It should include the test case number for all the tests being
performed, the name of the test case, the process, the business application condition that was tested, all associated scenarios, and the
priority of the test case. The form should also include the date, the page number of the particular form you are using, and the system and
integration information. This form is important because it tracks all the test cases, allows the test lead or another tester to reference the test
case, and shows all the pertinent information of the case at a glance. This information should be placed in a database or Web site so all
members of the team can review the information.
|----------------------------------------------------------|--------------|
| Test Case                                                | Page:        |
|----------------------------------------------------------|--------------|
| System:                                                  | Date:        |
|----------------------------------------------------------|--------------|
| Test   | Test      |          | Application | Associated |              |
| Case # | Case Name | Process  | Conditions  | Tasks      | Priority     |
|--------|-----------|----------|-------------|------------|--------------|
|        |           |          |             |            |              |
|--------|-----------|----------|-------------|------------|--------------|
|        |           |          |             |            |              |
|-------------------------------------------------------------------------|
Log for tracking test cases Track test cases and test results
Test Case Tracking
|-------------------------------------------------------------------------|
|        |           | Test     | Test        | Desired    | Actual       |
| Date   | Function  | Case #   | Scenario    | Results    | Results      |
|--------|-----------|----------|-------------|------------|--------------|
|        |           |          |             |            |              |
|--------|-----------|----------|-------------|------------|--------------|
|        |           |          |             |            |              |
|-------------------------------------------------------------------------|
Test matrix Track test cases and errors
|------------------------------------------------------------------------------|
| Test Case:  | Test            | Test Cases | Pass/ | No. of| Bug# |          |
| File Open#  | Description     | Samples    | Fail  | Bugs  |      | Comments |
|-------------|-----------------|------------|-------|-------|------|----------|
| 1.1         |Test file types  | 1.1        | P/F   | #     | #    |          |
|             |Support by       |            |       |       |      |          |
|             |the program      |            |       |       |      |          |
|-------------|-----------------|------------|-------|-------|------|----------|
| 1.2         |Verify the       | 1.2        | P/F   | #     | #    |          |
|             |different ways   |            |       |       |      |          |
|             |to open file     |            |       |       |      |          |
|             |(mouse, keyboard,|            |       |       |      |          |
|             | and accelerator |            |       |       |      |          |
|             | keys).          |            |       |       |      |          |
|-------------|-----------------|------------|-------|-------|------|----------|
| 1.3         |Verify the file  | 1.3        | P/F   | #     | #    |          |
|             |that can be      |            |       |       |      |          |
|             |opened from the  |            |       |       |      |          |
|             |local drives as  |            |       |       |      |          |
|             |well as network  |            |       |       |      |          |
|             |                 |            |       |       |      |          |
|------------------------------------------------------------------------------|
Bug tracking report Track errors as they occur and how they were corrected.
|---------------------------------------------------------|
|                  [Bug's Report Title]                   |
|---------------------------------------------------------|
| [Steps Involved to Reproduce the Error]                 |
|---------------------------------------------------------|
| [Expected Result]                                       |
|---------------------------------------------------------|
| [Actual Result]                                         |
|---------------------------------------------------------|
| [Note]                                                  |
|---------------------------------------------------------|
Weekly status report Give management a weekly progress report of the testing activity.
|-----------------------------------------------------------|
|                Status for [Person's Name]                 |
|              Week Ending [End of Week Date]               |
|-----------------------------------------------------------|
| This Week:                                                |
| 1. Details of the progress of the week, what was scheduled|
| 2.                                                        |
| 3.                                                        |
|-----------------------------------------------------------|
| Goals for Next Week:                                      |
| 1. Detail of what should be accomplished for the next     |
|    week                                                   |
| Issues:                                                   |
| 1. Issues that need to be addressed and handled.          |
| 2.                                                        |
| 3.                                                        |
|-----------------------------------------------------------|
Test Log Administrator The log tracks all information from the previous example. This log will track test log IDs, test case or test script ID,
the test event results, the action that was taken, and the date the action was taken. The log will also document the system on which the test
was run and the page number for the log. This log is important for tracking all the test logs and showing at a glance the event that
resulted and the action taken from the test. This information is critical for tracking and registering all log entries.
Test log
|-------------------------------------------------------------------------|
| Test Log |
|-------------------------------------------------------------------------|
| System: | Page: |
|----------------------------------------------------------|--------------|
| Test | Test Case | | | Action |
| Log ID | Test Script ID | Test Event Result | Action| Date |
|--------|----------------|------------------------|-------|--------------|
| | | | | |
|--------|----------------|------------------------|-------|--------------|
| | | | | |
|-------------------------------------------------------------------------|
Test script Set up the sequential step-by-step test, giving expected and actual results.
|-------------------------------------------------------------------------|
| Test Script |
|----------------------------------------------------------|--------------|
| Test Script Number | Priority: |
|----------------------------------------------------------|--------------|
| System Tested | Page: |
|----------------------------------------------------------|--------------|
| Test Case Number: | Date: | Tester: |
|-----------------------------------|----------------------|--------------|
| / | | | | Expected | Actual | Test |
|v | Step | Action | Data Entry | Results | Results | Log ID |
|---|------|----------|-------------|----------|-----------|--------------|
| | | | | | | |
|---|------|----------|-------------|----------|-----------|--------------|
| | | | | | | |
|-------------------------------------------------------------------------|
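A script in this format (step number, action, data entry, expected vs. actual results) can also be driven programmatically. The sketch below is illustrative only; the function under test and all step data are made up:

```python
def run_test_script(steps):
    """Execute each step's action and compare actual to expected results.

    Each step is (step_no, description, action, expected), where `action`
    is a callable returning the actual result. Returns one row per step.
    """
    log = []
    for step_no, description, action, expected in steps:
        actual = action()
        log.append({
            "step": step_no,
            "description": description,
            "expected": expected,
            "actual": actual,
            "result": "Pass" if actual == expected else "Fail",
        })
    return log

# A toy "function to be tested" and an illustrative two-step script.
format_bold = lambda text: "<b>" + text + "</b>"
steps = [
    (1, "Bold an empty string", lambda: format_bold(""), "<b></b>"),
    (2, "Bold plain text", lambda: format_bold("hi"), "<b>hi</b>"),
]
for row in run_test_script(steps):
    print(row["step"], row["result"])
```

The point of the structure is the same as the paper form's: every step records what was done, what was expected, and what actually happened, so a failure can be traced back to an exact step.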
Test Case Information
|---------------------------------------------------------|
| Test Case ID: F1006 |
| Test Description: Verify A |
| Revision History: Refer to form F1050 |
| Date Created: 3/23/00 1.0 - Tester Name - Created |
| Function to be Tested: A |
|---------------------------------------------------------|
| Environment: Windows 2000 |
| Test Setup: N/A |
| 1. Open the program |
| 2. Open a new document |
| 3. Type the text |
| 4. Select the text |
|---------------------------------------------------------|
| Expected Result: A formats the text correctly |
| Actual Results: Pass |
| |
| Completed: Date |
| Signed Out: Name of Tester |
|---------------------------------------------------------|
Issues log Itemize and track specific testing issues and resolutions.
|---------------------------------------------------------------------|
|Ref# |Type of Issue | Priority | Description |
|-----|--------------|-----------|------------------------------------|
| | | | |
|-----|--------------|-----------|------------------------------------|
| | | | |
|---------------------------------------------------------------------|
Resolution The resolution log is used to track issues and how they have been resolved. It uses the reference number assigned in previous
documents, the status of the problem, the last action taken on the problem, and who took that action. It will also report who made the
decision for the resolution and how the resolution will be handled. This document will show the testers what is being done for documented
problems and whether their test is contingent on the resolution of a previous bug or problem.
|--------------------------------------------------------------------|
| | | Last | | | | |
| | | Action| | Parties | Decision | |
|Ref# |Status | Date | Action | Involved | Made | Resolution |
|-----|-------|-------|--------|----------|----------|---------------|
| | | | | | | |
|-----|-------|-------|--------|----------|----------|---------------|
| | | | | | | |
|--------------------------------------------------------------------|
Test Bed The test bed is the testing environment used for all stages of testing.
|-----------------------------------------------------------------------------------|
| Test Bed                                                                          |
|-------------------------------|------------------------------|--------------------|
| Number of Applications:       | Date:                        | Lead Engineer      |
|                               |                              | Assigned to Project|
|-------------------------------|------------------------------|--------------------|
| Dates Application Will Be     | Anticipated Problems                              |
| Tested                        |                                                   |
|-------------------------------|------------------------------|--------------------|
| Dates for Setting Up Test Bed:| Engineer Assigned to Project:| Additional         |
|                               |                              | Resources          |
|-------------------------------|------------------------------|--------------------|
| Software / Hardware           | Version / Type               | Problems           |
|-------------------------------|------------------------------|--------------------|
|                               |                              |                    |
|-------------------------------|------------------------------|--------------------|
|                               |                              |                    |
|-------------------------------|------------------------------|--------------------|
|                               |                              |                    |
|-----------------------------------------------------------------------------------|
What makes a good software tester?
1. Know Programming. Might as well start out with the most controversial one. There's a popular myth that testing can be staffed with
people who have little or no programming knowledge. It doesn't work, even though it is an unfortunately common approach. There are two
main reasons why it doesn't work.
(1) They're testing software. Without knowing programming, they can't have any real insights into the kinds of bugs that come into software
and the likeliest place to find them. There's never enough time to test "completely", so all software testing is a compromise between
available resources and thoroughness. The tester must optimize scarce resources and that means focusing on where the bugs are likely to
be. If you don't know programming, you're unlikely to have useful intuition about where to look.
(2) All but the simplest (and therefore, ineffectual) testing methods are tool- and technology-intensive. The tools, both as testing products
and as mental disciplines, all presume programming knowledge. Without programmer training, most test techniques (and the tools based on
those techniques) are unavailable. The tester who doesn't know programming will always be restricted to the use of ad-hoc techniques and
the most simplistic tools.
Taking entry-level programmers and putting them into a test organization is not a good idea.
3. Intelligence.
Back in the 60's, there were many studies done to try to predict the ideal qualities for programmers. There was a shortage and we were
dipping into other fields for trainees. The most infamous of these was IBM's Programmer's Aptitude Test (PAT). Strangely enough, despite
the fact that IBM later repudiated this test, it continues to be (ab)used as a benchmark for predicting programmer aptitude. What IBM
learned with follow-on research is that the single most important quality for programmers is raw intelligence-good programmers are really
smart people-and so are good testers.
4. Hyper-Sensitivity to Little Things.
Good testers notice little things that others (including programmers) miss or ignore. Testers see symptoms, not bugs. We know that a given
bug can have many different symptoms, ranging from innocuous to catastrophic. We know that the symptoms of a bug are arbitrarily related
in severity to the cause. Consequently, there is no such thing as a minor symptom-because a symptom isn't a bug. It is only after the
symptom is fully explained (i.e., fully debugged) that you have the right to say if the bug that caused that symptom is minor or major.
Therefore, anything at all out of the ordinary is worth pursuing. The screen flickered this time, but not last time-a bug. The keyboard is a little
sticky-another bug. The account balance is off by 0.01 cents-great bug. Good testers notice such little things and use them as an entree to
finding a closely-related set of inputs that will cause a catastrophic failure and therefore get the programmers' attention. Luckily, this
attribute can be learned through training.
A good indicator of the kind of skill I'm looking for here is the ability to do crossword puzzles in ink. This skill, research has shown, also
correlates well with programmer and tester aptitude. It maps well onto the kind of unresolved chaos with which the tester must deal daily.
Here's the theory behind the notion. If you do a crossword puzzle in ink, you can't put down a word, or even part of a word, until you
have confirmed it by a compatible cross-word. So you keep a dozen tentative entries unmarked, and when, by some process or another, you
realize that there is a compatible cross-word, you enter them both. You keep score by how many corrections you have to make-not by
merely finishing the puzzle, because that's a given. I've done many informal polls of this aptitude at my seminars and found a much higher
percentage of crossword-puzzles-in-ink aficionados than you'd get in a normal population.
6. People Skills.
Here's another area in which testers and programmers can differ. You can be an effective programmer even if you are hostile and anti-
social; that won't work for a tester. Testers must be able to take a lot of abuse from outraged programmers. A sense of humor and a thick skin will help
the tester survive. Testers may have to be diplomatic when confronting a senior programmer with a fundamental goof. Diplomacy, tact, a
ready smile-all work to the independent tester's advantage. This may explain one of the (good) reasons that there are so many women in
testing. Women are generally acknowledged to have more highly developed people skills than comparable men-whether it is something
innate on the X chromosome as some people contend or whether it is that without superior people skills women are unlikely to make it
through engineering school and into an engineering career, I don't know and won't attempt to say. But the fact is there and those sharply-
honed people skills are important.
7. Tenacity.
An ability to reach compromises and consensus can be at the expense of tenacity. That's the other side of the people skills. Being socially
smart and diplomatic doesn't mean being indecisive or a limp rag that anyone can walk all over. The best testers are both-socially adept
and tenacious where it matters. The best testers are so skillful at it that the programmer never realizes that they've been had. Tenacious-my
picture is that of an angry pit bull fastened on a burglar's rear end. Good testers don't let go. You can't intimidate them-even by pulling rank. They'll
need high-level backing, of course, if they're to get you the quality your product and market demand.
8. Organized.
I can't imagine a scatter-brained tester. There's just too much to keep track of to trust to memory. Good testers use files, databases, and all
the other accouterments of an organized mind. They make up checklists to keep themselves on track. They recognize that they too can
make mistakes, so they double-check their findings. They have the facts and figures to support their position. When they claim that there's a
bug-believe it, because if the developers don't, the tester will flood them with well-organized, overwhelming, evidence.
A consequence of a well-organized mind is a facility for good written and oral communications. As a writer and editor, I've learned that the
inability to express oneself clearly in writing is often symptomatic of a disorganized mind. I don't mean that we expect everyone to write
deathless prose like a Hemingway or Melville. Good technical writing is well-organized, clear, and straightforward: and it doesn't depend on
a 500,000 word vocabulary. True, there are some unfortunate individuals who express themselves superbly in writing but fall apart in an
oral presentation- but they are typically a pathological exception. Usually, a well-organized mind results in clear (even if not inspired) writing
and clear writing can usually be transformed through training into good oral presentation skills.
9. Skeptical.
That doesn't mean hostile, though. I mean skepticism in the sense that nothing is taken for granted and that all is fit to be questioned. Only
tangible evidence in documents, specifications, code, and test results matter. While they may patiently listen to the reassuring, comfortable
words from the programmers ("Trust me. I know where the bugs are.")-and do it with a smile-they ignore all such unsubstantiated assurances.
13. Honest.
Testers are fundamentally honest and incorruptible. They'll compromise if they have to, but they'll righteously agonize over it. This
fundamental honesty extends to a brutally realistic understanding of their own limitations as a human being. They accept the idea that they
are no better and no worse, and therefore no less error-prone than their programming counterparts. So they apply the same kind of self-
assessment procedures that good programmers will. They'll do test inspections just like programmers do code inspections. The greatest
possible crime in a tester's eye is to fake test results.
Personal Requirements For Software Quality Assurance Engineers
Challenges
Rapidly changing requirements
Foresee defects that are likely to happen in production
Monitor and Improve the software development processes
Ensure that standards and procedures are being followed
Customer Satisfaction and confidence
Compete in the market
Compuware
Compuware Corporation is a recognized industry leader in enterprise software and IT services that help maximize the value of technology
investments. We offer a powerful set of integrated solutions for enterprise IT including IT governance, application development, quality
assurance and application service management. Compuware is one of the largest software test tool vendors, with a turnover in excess of
$2 billion and a staff of more than 15,000, of whom 9,500 are professional services staff with skills covering the whole development lifecycle.
Compuware not only supplies the tools but will also provide staff to develop your test suite initially and hand it over to internal staff as required.
Compuware's test tool set is second only to Rational's on the Windows platform (for coverage), but for complete coverage across platforms,
including mainframe and Unix, they are the best. So for the larger company that requires a complete testing solution to cover these
platforms it is probably best to start with Compuware as they will offer unit test, database test, mainframe, functional, load, web test, defect
tracking and more in their tool set. No other vendor can offer this range.
Compuware Website http://www.compuware.com/
Compuware Software Test Tools
Compuware provides tools for Requirements Management, Risk-based Test Management, Unit, Functional and Load Testing, Test Data
Management, and Quality Discipline.
Compuware Application Reliability Solution (CARS) offers a more effective approach. CARS combines our patented methodology with
innovative enterprise-wide technologies and certified quality assurance expertise to instill a consistent discipline across development,
quality assurance and operations. By following this systematic testing approach, you:
- adhere to a consistent quality assurance process
- deliver the quality metrics required to make a sound go/no go decision
- ensure the most critical of business requirements are met
QACenter Enterprise Edition: Requirements Management Tool. Align testing with business requirements. With QACenter Enterprise Edition
you can:
- prioritize testing activities through the assignment of risk
- align test requirements with business goals
- quickly measure progress and effectiveness of test activities
- centrally manage and execute various manual and automated testing assets
- automate the process of entering, tracking and resolving defects found during testing.
Compuware DevPartner: A family of products providing a comprehensive development, debugging and tuning solution to the challenges of
application development, from concept to coding to security and finally to completion. DevPartner products cover Microsoft, Java™, 64-bit
and driver development, helping you improve productivity and increase software reliability—from simple two-tier applications to complex
distributed and web-based systems.
Xpediter: Analyze, test and debug mainframe applications. With Xpediter you can:
- Analyze programs and applications
- Test and debug programs interactively
- Understand and control the process of data and logic
- Identify what has executed within an application
- Debug DB2 Stored Procedures
- Test date and time related logic
File-AID: Test data management tool. Helps you pull together test data from multiple sources to create, move, convert, reformat, subset, and
validate your test data bed. Its test methodology has helped organizations test more efficiently and effectively.
Rational
Rational is now part of IBM, which is a leader in the invention, development and manufacture of the industry's most advanced information
technologies, including computer systems, software, storage systems and microelectronics. Rational offers the most complete lifecycle
toolset (including testing).
When it comes to Object Oriented development they are the acknowledged leaders with most of the leading OO experts working for them.
Some of their products are worldwide leaders, e.g. Rational Rose, ClearCase, RequisitePro, etc.
Their Unified Process is a very good development model that I have been involved with; it allows mapping of requirements to use
cases and test cases, with a whole set of tools to support the process.
If you are developing products using an OO approach then you should include Rational in the evaluation.
Rational Website http://www-306.ibm.com/software/rational/
Rational Tools
Rational Functional Tester - An advanced, automated functional and regression testing tool for testers and GUI developers who need
superior control for testing Java, Microsoft Visual Studio .NET, and Web-based applications.
Rational Manual Tester - A manual test authoring and execution tool for testers and business analysts who want to improve the speed,
breadth, and reliability of their manual testing efforts. Promotes test step reuse to reduce the impact of software change on manual test
maintenance activities.
Rational Performance Tester - IBM Rational Performance Tester is a load and performance testing solution for teams concerned about the
scalability of their Web-based applications. Combining ease of use with deep analysis capabilities, Rational Performance Tester simplifies
test creation, load generation, and data collection to help ensure that applications can scale to thousands of concurrent users.
Rational Purify - Advanced runtime and memory management error detection. Does not require access to source code and can thus be
used with third-party libraries in addition to home-grown code.
Rational Robot - General-purpose test automation tool for QA teams who want to perform functional testing of client/server applications.
Rational Test RealTime - Cross-platform solution for component testing and runtime analysis. Designed specifically for those who write
code for embedded and other types of pervasive computing products.
Mercury Interactive
Mercury is the global leader in Business Technology Optimization (BTO) software and services. Our BTO products and solutions help
customers govern and manage IT and optimize application quality, performance, and availability. Mercury enables IT organizations to shift
their focus from managing IT projects to optimizing business outcomes. Global 2000 companies and government agencies worldwide rely
on Mercury to lower IT costs, reduce risks, and optimize for growth; address strategic IT initiatives; and optimize enterprise application
environments like J2EE, .NET, and ERP/CRM.
Mercury has a number of complementary tools, TestDirector being the most integrated one. They have a lot of third-party support, and test
tools are usually compared against Mercury's before the others. Mercury tends to use third-party companies to supply professional services
support for their tools (e.g. if you require onsite development of test suites).
Mercury Website http://www.mercury.com/
Mercury Interactive Software Test Tools
Mercury TestDirector: allows you to deploy high-quality applications quickly and effectively by providing a consistent, repeatable process for
gathering requirements, planning and scheduling tests, analyzing results, and managing defects and issues. TestDirector is a single, Web-
based application for all essential aspects of test management — Requirements Management, Test Plan, Test Lab, and Defects
Management. You can leverage these core modules either as a standalone solution or integrated within a global Quality Center of
Excellence environment.
Mercury QuickTest Professional: provides the industry's best solution for functional test and regression test automation - addressing every
major software application and environment. This next-generation automated testing solution deploys the concept of Keyword-driven testing
to radically simplify test creation and maintenance. Unique to QuickTest Professional’s Keyword-driven approach, test automation experts
have full access to the underlying test and object properties, via an integrated scripting and debugging environment that is round-trip
synchronized with the Keyword View.
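Keyword-driven testing, as described above, separates the test itself (a table of keywords and arguments that a non-programmer can edit) from the keyword implementations maintained by automation experts. The following is a minimal sketch of the idea in Python, independent of any Mercury product; all keyword and field names are invented:

```python
# Keyword implementations: each keyword maps to a function taking string args.
# `state` stands in for the application under test.
state = {"fields": {}}

def open_form(name):
    """Simulate opening a named form with empty fields."""
    state["fields"] = {}

def enter_text(field, value):
    """Simulate typing a value into a field."""
    state["fields"][field] = value

def verify_text(field, expected):
    """Check that a field holds the expected value."""
    assert state["fields"].get(field) == expected, field + " mismatch"

KEYWORDS = {"OpenForm": open_form, "EnterText": enter_text, "VerifyText": verify_text}

# The test itself is pure data -- the "keyword view" a business analyst edits.
test_table = [
    ("OpenForm", "Login"),
    ("EnterText", "username", "alice"),
    ("VerifyText", "username", "alice"),
]

for keyword, *args in test_table:
    KEYWORDS[keyword](*args)
print("all steps passed")
```

The maintenance benefit is that when the application changes, only the keyword implementations change; the tables of test steps survive untouched.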
Mercury WinRunner: offers your organization a powerful tool for enterprisewide functional and regression testing. Mercury WinRunner
captures, verifies, and replays user interactions automatically, so you can identify defects and ensure that business processes work
flawlessly upon deployment and remain reliable. With Mercury WinRunner, your organization gains several advantages, including:
- Reduced testing time by automating repetitive tasks.
- Optimized testing efforts by covering diverse environments with a single testing tool.
- Maximized return on investment through modifying and reusing test scripts as the application evolves.
Mercury Business Process Testing: the industry’s first web-based test automation solution, can add real value. It enables non-technical
business analysts to build, data-drive, and execute test automation without any programming knowledge. By empowering business analysts
and quality automation engineers to collaborate more effectively using a consistent and standardized process, you can:
- Improve the productivity of your testing teams.
- Detect and diagnose performance problems before system downtime occurs.
- Increase the overall quality of your applications.
ActiveTest: can help ensure that users have a positive experience with a Web site. ActiveTest is a hosted, Web-based testing service that
conducts full scale stress testing of your Web site. By emulating the behavior of thousands of customers using your Web application,
ActiveTest identifies bottlenecks and capacity constraints before they affect your customers.
Mercury LoadRunner: prevents costly performance problems in production by detecting bottlenecks before a new system or upgrade is
deployed. You can verify that new or upgraded applications will deliver intended business outcomes before go-live, preventing over-
spending on hardware and infrastructure. It is the industry-standard load testing solution for predicting system behavior and performance,
and the only integrated load testing, tuning, and diagnostics solution in the market today. With LoadRunner web testing software, you can
measure end-to-end performance, diagnose application and system bottlenecks, and tune for better performance—all from a single point of
control. It supports a wide range of enterprise environments, including Web Services, J2EE, and .NET.
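The core load-testing idea described above (many concurrent virtual users, with end-to-end timing of each transaction) can be sketched with standard-library threads. This is only an illustration of the concept, not how LoadRunner works internally; the transaction here is a stand-in function rather than a real request against a system under test:

```python
import threading
import time

def transaction():
    """Stand-in for one user's business transaction (e.g. an HTTP request)."""
    time.sleep(0.01)  # simulate server work

def virtual_user(n_transactions, timings):
    """One virtual user: run transactions and record end-to-end response times."""
    for _ in range(n_transactions):
        start = time.perf_counter()
        transaction()
        timings.append(time.perf_counter() - start)

# Simulate 20 concurrent users, 5 transactions each.
timings = []
users = [threading.Thread(target=virtual_user, args=(5, timings))
         for _ in range(20)]
for u in users:
    u.start()
for u in users:
    u.join()

print(len(timings), "transactions, avg",
      round(sum(timings) / len(timings) * 1000, 1), "ms")
```

A real load test adds ramp-up schedules, think time between transactions, and monitoring of the server side; the principle of measuring response time per virtual user under increasing concurrency is the same.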
Segue
Segue Software is a global leader dedicated to delivering quality optimization solutions that ensure the accuracy and performance of
enterprise applications. Today Segue® solutions are successfully meeting the quality optimization challenges of more than 2,000 customers
around the world, including 61% of the Fortune 100. Our results-oriented approach helps our customers optimize quality every step of the
way.
Anyone who has used SilkTest alongside any of the other test tools will agree that it is the most function-rich out of the box. However, the
learning curve (if you have no programming experience) is the steepest. In my opinion it provides the most robust facilities: an object map,
test recovery facilities and an object-based development language. Segue's performance test tool SilkPerformer also performs very well
compared to its rivals, e.g. LoadRunner, LoadTest, etc.
Segue Website see http://www.segue.com/.
Segue Software Test Tools
SilkCentral Test Manager - Automate your testing process for optimal quality and productivity. SilkCentral Test Manager is an all-inclusive
test management system that builds quality and productivity into the testing process to speed the delivery of successful enterprise
applications. It lets you plan, document and manage each step of the testing cycle from capturing and organizing key business
requirements, tracing them through execution … designing the optimal test plans … scheduling tests for unattended execution … tracking
the progress of manual and automated tests … identifying the features at risk … and assessing when the application is ready to go live.
SilkCentral Issue Manager - Resolve issues quickly & reliably by automating the tracking process. An estimated 80% of all software costs are
spent on resolving application defects. With SilkCentral™ Issue Manager, you can reduce the cost and speed the resolution of defects and
other issues throughout the entire application lifecycle. SilkCentral Issue Manager features a flexible, action-driven workflow that adapts
easily to your current business processes and optimizes defect tracking by automatically advancing each issue to its next stage. Its Web
user interface provides 24x7x365 access to a central repository of all defect-related information - simplifying usage among geographically
dispersed groups and promoting collaboration among different departments. Meanwhile insightful reports enable you to determine project
readiness based on the status of important issues.
SilkTest - Meet the time-to-market & quality goals of enterprise applications. SilkTest is the industry-leading automated tool for testing the
functionality of enterprise applications in any environment. It lets you thoroughly verify application reliability within the confines of today's
short testing cycles by leveraging the accuracy, consistency and time-saving benefits of Segue's automated testing technology. Designed
for ease of use, SilkTest includes a host of productivity-boosting features that let both novice and expert users create functional tests
quickly, execute them automatically and analyze results accurately. With less time spent testing, your QA staff can expand test coverage
and optimize application quality. In addition to validating the full functionality of an application prior to its initial release, users can easily
evaluate the impact of new enhancements on existing functionality by simply reusing existing test cases.
SilkTest International - Ensure the reliability of multi-lingual enterprise applications. When it comes to localized versions of global
applications, companies traditionally resort to second-class manual testing - a time-consuming and costly process which leaves a large
margin of error. SilkTest International changes all that by providing a quick, accurate and fully automated way to test localized applications.
SilkPerformer Component - Optimize component quality and reduce costs by testing remote application components early in development.
As the central building blocks of a distributed application, remote application components are key to ensuring application quality.
SilkPerformer® Component Test Edition from Segue® lets you test and optimize three major quality aspects of critical remote components
early in the application lifecycle - even before client applications are available.
SilkPerformer - Test the limits of your enterprise applications. SilkPerformer® is the industry's most powerful - yet easiest to use -
automated load and performance testing solution for optimizing the performance, scalability and reliability of mission-critical enterprise
applications. With SilkPerformer, you can accurately predict the "breaking points" in your application and its underlying infrastructure before
it is deployed, regardless of its size or complexity. SilkPerformer has the power to simulate thousands of simultaneous users working with
multiple computing environments and interacting with various application environments such as Web, client/server, Citrix® MetaFrame®, or
ERP/CRM systems - all with a single script and one or more test machines. Yet its visual approach to scripting and root-cause analysis
makes it amazingly simple and efficient to use. So you can create realistic load tests easily, find and fix bottlenecks quickly, and deliver
high-performance applications faster than ever.
SilkCentral Performance Manager - Optimize the availability, performance and accuracy of mission-critical applications. SilkCentral™
Performance Manager is an application performance management solution for optimizing the quality of mission-critical applications.
SilkCentral Performance Manager monitors the end-user experience on three dimensions: availability, accuracy and performance. Active
monitoring utilizes synthetic business transactions for service-level and performance monitoring, while passive monitoring provides an
understanding of real-user behavior by recording actual user transactions.
Facilita
Forecast by Facilita is mainly used for performance testing; functionally it is as strong as the other performance tools, and the cost is
usually at least 50% lower.
Facilita Website http://www.facilita.com/
Facilita Software Test Tools
forecast - The Load and Performance Test Tool. A non-intrusive tool for system load testing, performance measurement and multi-user
functional testing. Load test your enterprise infrastructure by simulating thousands of users performing realistic user actions.
Empirix
Empirix is the leading provider of integrated testing and management solutions for Web and voice applications and VoIP networks.
Empirix Website http://www.empirix.com/
Empirix Software Test Tools
e-TEST suite - A powerful, easy-to-use application testing solution that ensures the quality, performance, and reliability of your Web
applications and Web Services. This integrated, full lifecycle solution allows you to define and manage your application testing process,
validate application functionality, and ensure that your applications will perform under load. With e-TEST suite, you can deploy your Web
applications and Web Services in less time while maximizing the efficiency of your testing team.
e-Manager Enterprise - A comprehensive test management solution that allows you to plan, document, and manage the entire application
testing process. Its intuitive, Web-based interface and integrated management modules allow you to set up a customized testing process to
fit the needs of your organization.
e-Tester - A flexible, easy-to-use solution for automated functional and regression testing of your Web applications and Web Services. It
provides the fastest way to create automated scripts that emulate complex Web transactions. e-Tester then allows you to use these scripts
for automated functional and regression testing. The same scripts can also be used in e-Load for load and performance testing and in
OneSight for post-deployment application management.
e-Load - A powerful solution that enables you to easily and accurately test the performance and scalability of your Web applications and
Web Services. Using e-Load you can simulate hundreds or thousands of concurrent users, executing real business transactions, to analyze
how well your Web applications will perform under load. It also allows you to monitor the performance of your back-end application
infrastructure, during your load test, to identify bottlenecks and help you tune application performance. e-Load is fully accessible via a Web
browser interface, which enables testers and developers to collaborate during the application testing and tuning process.
OpenSTA
OpenSTA is a distributed software testing architecture designed around CORBA; it was originally developed as commercial software by
CYRANO. The current toolset can perform scripted HTTP and HTTPS heavy-load tests with performance
measurements from Win32 platforms. However, the architectural design means it could be capable of much more.
OpenSTA Website http://www.opensta.org/
The OpenSTA toolset is Open Source software licensed under the GNU GPL (General Public License), which means it is free and will always
remain free.
AutoTester
AutoTester was founded in 1985 and was the first automated test tool company. Since its inception, AutoTester has continually led the
automated software testing industry with its innovative and powerful testing tools designed to help customers worldwide with e-business,
SAP R/3, ERP, and Windows software quality initiatives.
AutoTester Website http://www.autotester.com/
AutoTester Software Test Tools
AutoTester ONE - Functional, regression, and systems integration testing of Windows, Client Server, Host/Legacy, or Web applications.
Provides true end-to-end testing of applications
Parasoft
Parasoft is the leading provider of innovative solutions that automatically identify software errors and prevent them from recurring in the
development and QA process. For more than 15 years, this privately held company has delivered easy-to-use, scalable, and customizable
error prevention tools and methodologies through its portfolio of leading brands, including Jtest, C++test, Insure++, and SOAtest.
Parasoft Website http://www.parasoft.com/
Parasoft Software Test Tools
WebKing - An automated Web application testing product that automates the most critical Web verification practices: static analysis,
functional/regression testing, and load testing.
Jtest - An automated Java unit testing and coding standard analysis product. It automatically generates and executes JUnit tests for instant
verification, and allows users to extend these tests. In addition, it checks whether code follows over 500 coding standard rules and
automatically corrects violations of over 200 rules.
C++test - An automated C/C++ unit testing and coding standard analysis product. It automatically generates and executes unit tests for
instant verification, and allows users to customize and extend these tests as needed. In addition, it checks whether code follows over 700
coding standard rules.
SOAtest - An automated Web services testing product that allows users to verify all aspects of a Web service, from WSDL validation, to unit
and functional testing of the client and server, to performance testing. SOAtest addresses key Web services and SOA development issues
such as interoperability, security, change management, and scalability.
.TEST - An automated unit testing and coding standard analysis product that tests classes written on the Microsoft® .NET Framework
without requiring developers to write a single test case or stub.
winperl.com
winperl.com is a small company with a simple, well-designed product called WinPerl++/GUIdo.
WinPerl Website http://www.winperl.com/
winperl.com Software Test Tools
WinPerl++/GUIdo - A suite of tools written to meet the need for a Windows UI automation tool with an easy-to-learn scripting
language; for this reason, the Perl programming language was chosen. WinPerl is a collection of Windows DLLs, Perl modules, a developer UI,
and related tools that make its use possible in various environments. The WinPerl++ (a.k.a. GUIdo) tool suite is ideal for the following
purposes:
- Windows UI application automation.
- Corporate SQA application test efforts.
- Automating repetitive tasks.
- IT functions that eliminate human interaction.
Candela Technologies
Candela Technologies provides powerful and affordable Ethernet traffic generation equipment featuring large port counts in a compact form
factor.
Candela Website http://www.candelatech.com/
Candela Technologies Software Test Tools
LANforge FIRE - LANforge now has improved support for Microsoft Windows operating systems; see the vendor's notes for details on
the supported features. LANforge on Linux remains the most precise, most feature-rich, and highest-performing option.
Seapine Software
Seapine's software development and testing tools streamline your development process, saving you significant time and money. Enjoy
feature-rich tools that are flexible enough to work in any software development environment. With Seapine integrated tools, every step in
the development process feeds critical information into the next step, letting you focus on developing high quality software in less time.
Seapine Website http://www.seapine.com/
Seapine Software Software Test Tools
QA Wizard - Incorporates a user-friendly interface with integrated data and a robust scripting engine. It adapts easily to new and evolving
technologies and can be mastered in a short period of time. The result: you deliver higher-quality software faster.
• Web based access allows your users to access the database from anywhere.
• Available as a hosted solution or a download for local installation, whichever suits your needs best.
• Completely customizable issue fields, workflows, data entry forms, user groups and projects let you manage your data, your way.
• Carefully designed to be user-friendly and intuitive, so there is little or no training required for end users. Each screen has context-
sensitive help, and full user guides are available from every help page.
• Integrated screen capture tool allows for easy one-click capture of screen shots.
BugTracker from Applied Innovation Management - http://www.bugtracking.com/ - The complete bug tracking software solution for bug,
defect, feature, and request tracking. Your one-stop-shop solution for web-based bug tracking software. With a 12-year history, over 500
installed sites, a million users, and a 30-day money-back guarantee, you can't go wrong! Call us for a demo today!
DefectTracker from Pragmatic Software - http://www.defecttracker.com - Defect Tracker is a fully web-based defect tracking and support
ticket system that manages issues and bugs, customer requirements, test cases, and allows team members to share documents.
PR-Tracker from Softwise Company - http://www.prtracker.com/ - PR-Tracker is an enterprise level problem tracking system designed
especially for bug tracking. PR-Tracker is easy to use and setup. It has a sensible default configuration so you can begin tracking bugs right
away while configuring the software on the fly to suit your special needs. Features classification, assignment, sorting, searching, reporting,
access control, user permissions, attachments, email notification and much more. Softwise Company
TestTrack Pro from Seapine Software - http://www.seapine.com/ttpro.html - TestTrack Pro delivers time-saving issue management
features that keep all team members informed and on schedule. Its advanced configurability and scalability make it the most powerful
solution at the best value. Move ahead of your competition by moving up to TestTrack Pro.
Bugzilla from Mozilla Organization - http://www.bugzilla.com/ - Bugzilla is a "Defect Tracking System" or "Bug-Tracking System". Defect
Tracking Systems allow individual or groups of developers to keep track of outstanding bugs in their product effectively. Most commercial
defect-tracking software vendors charge enormous licensing fees. Despite being "free", Bugzilla has many features its expensive
counterparts lack. Consequently, Bugzilla has quickly become a favorite of hundreds of organizations across the globe.
BugCollector - http://www.nesbit.com/ - BugCollector Pro 3.0 is a multiuser database specifically designed for keeping track of software
bugs and feature requests. With it, you can track bugs from first report through resolution and feature requests from initial contact through
implementation.
ProblemTracker - http://www.netresultscop.com/fs_pbtrk_info.html - ProblemTracker is a powerful, easy-to-use Web-based tool for defect
tracking and change management. ProblemTracker delivers the benefits of automated bug tracking to any desktop in a familiar Web browser
interface, at a price every organization can afford.
ClearQuest - http://www.rational.com/ - ClearQuest is a flexible defect tracking/change request management system for tracking and
reporting on defects.
SWBTracker - http://www.softwarewithbrains.com/suntrack.htm - SWBTracker supports concurrent multiuser licensing at an extremely
competitive price, as well as many of the most important features developers and testers are looking for in today's bug tracking software:
automatic email notifications with customizable message templates, complete issue life cycle tracking with automatic change history
logging, a custom report designer, and many built-in summary and detail reports.
Elementool - http://www.elementool.com/ - Elementool is an application service provider for Web-based software bug tracking and support
management tools. Elementool provides its services to software companies and business Web sites all over the world.
Part III - Software Test Automation Tool Evaluation Criteria
Ease of Use
• Learning curve
• Easy to maintain the tool
• Easy to install--tool may not be used if difficult to install
Tool Customization
Platform Support
• Can it be moved and run on several platforms at once, across a network (that is, cross-Windows support, Win95, and WinNT)?
Multiuser Access
• What database does the tool use? Does it allow for scalability?
• Network-based test repository--necessary when multiple access to repository is required
Defect Tracking
Tool Functionality
• Test scripting language--does the tool use a flexible, yet robust scripting language? What is the complexity of the scripting
language: is it a 4GL? Does it allow for modular script development?
• Complexity of scripting language
• Scripting language allows for variable declaration and use; allows passing of parameters between functions
• Does the tool use a test script compiler or an interpreter?
• Interactive test debugging--does the scripting language allow the user to view variable values, step through the code, integrate
test procedures, or jump to other external procedures?
• Does the tool allow recording at the widget level (object recognition level)?
• Does the tool allow for interfacing with external .dll and .exe files?
• Published APIs--language interface capabilities
• ODBC support--does the tool support any ODBC-compliant database?
• Is the tool intrusive (that is, does source code need to be expanded by inserting additional statements)?
• Communication protocols--can the tool be adapted to various communication protocols (such as TCP/IP, IPX)?
• Custom control support--does the tool allow you to map to additional custom controls, so the tool is still compatible and usable?
• Ability to kick off scripts at a specified time; scripts can run unattended
• Allows for adding timers
• Allows for adding comments during recording
• Compatible with the GUI programming language and entire hardware and software development environment used for the application
under test (e.g., VB, PowerBuilder)
• Can query or update test data during playback (that is, allows the use of SQL statements)
• Supports the creation of a library of reusable functions
• Allows for wrappers (shells) where multiple procedures can be linked together and are called from one procedure
• Test results analysis--does the tool allow you to easily see whether the tests have passed or failed (that is, automatic creation of
test results log)?
• Test execution on script playback--can the tool handle error recovery and unexpected active windows, log the discrepancy, and
continue playback (automatic recovery from errors)?
• Allows for synchronization between client and server
• Allows for automatic test procedure generation
• Allows for automatic data generation
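Several of the criteria above (unattended runs, automatic recovery from unexpected active windows, automatic creation of a pass/fail results log) fit together. As a rough illustration, a playback harness with error recovery might be sketched like this in Python; the step names and the recovery hook are hypothetical stand-ins, not any vendor's API.

```python
# Sketch of a playback harness that logs results and recovers from
# failures instead of aborting the whole run. The "steps" and the
# recovery action are hypothetical stand-ins for a real tool's API.

def run_suite(steps, recover):
    """Run each step; on error, invoke the recovery hook and continue."""
    log = []
    for name, step in steps:
        try:
            step()
            log.append((name, "PASS", ""))
        except Exception as exc:
            recover()                      # e.g. dismiss the unexpected window
            log.append((name, "FAIL", str(exc)))
    return log

# Usage: two passing steps and one that raises mid-run.
steps = [
    ("login",  lambda: None),
    ("search", lambda: (_ for _ in ()).throw(RuntimeError("unexpected window"))),
    ("logout", lambda: None),
]
log = run_suite(steps, recover=lambda: None)
for name, status, detail in log:
    print(name, status, detail)
```

The point of the sketch is the continuation after failure: the discrepancy is logged and playback proceeds, which is what "automatic recovery from errors" asks of a tool.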
Reporting Capability
• Performance and stress testing tool is integrated with GUI testing tool
• Supports stress, load, and performance testing
• Allows for simulation of users without requiring use of physical workstations
• Ability to support configuration testing (that is, tests can be run on different hardware and software configurations)
• Ability to submit a variable script from a data pool or library of scripts/data entries and logon IDs/passwords
• Supports resource monitoring (memory, disk space, system resources)
• Synchronization ability, so that scripts can access a record in the database at the same time, to determine locking, deadlock
conditions, and concurrency control problems
• Ability to detect when events have completed in a reliable fashion
• Ability to provide client to server response times
• Ability to provide graphical results
• Ability to provide performance measurements of data loading
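The load-testing criteria above (simulating users without physical workstations, capturing client-to-server response times) can be pictured with a minimal sketch. The "transaction" here is only a placeholder delay, not a real request.

```python
import threading
import time

# Minimal sketch of simulating concurrent virtual users and collecting
# client-side response times, the way a load tool does without physical
# workstations. The transaction body is a short sleep, not a real call.

def virtual_user(results, lock):
    start = time.perf_counter()
    time.sleep(0.01)                 # stand-in for one business transaction
    elapsed = time.perf_counter() - start
    with lock:
        results.append(elapsed)      # per-user response time

def run_load(users=10):
    results, lock = [], threading.Lock()
    threads = [threading.Thread(target=virtual_user, args=(results, lock))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

timings = run_load(10)
print(len(timings), "transactions, max response %.3fs" % max(timings))
```

A real tool would replace the sleep with an HTTP or database transaction and feed the collected timings into graphical reports.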
Version Control
• Test planning and management tool is integrated with requirements management tool
• Test planning and management tool follows specific industry standard on testing process (such as SEI/CMM, ISO)
• Supports test execution management
• Allows for test planning--does the tool support planning, managing, and analyzing testing efforts? Can the tool reference test
plans, matrices, and product specifications to create traceability?
• Allows for measuring test progress
• Allows for various reporting activities
Pricing
Vendor Qualifications
• Maturity of product
• Market share of product
• Vendor qualifications, such as financial stability and length of existence. What is the vendor's track record?
• Are software patches provided, if deemed necessary?
• Are upgrades provided on a regular basis?
• Customer support
• Training is available
• Is a tool Help feature available? Is the tool well documented?
• Availability and access to tool user groups
Test Identification
This ADTP is intended to provide information for System/Integration Testing for the PRODUCT NAME module of the PROJECT NAME. The
test effort may be referred to by its PROJECT REQUEST (PR) number and its project title for tracking and monitoring of the testing
progress.
Entry Criteria
The ADTP is complete, excluding actual test results. The ADTP has been signed-off by appropriate sponsor representatives indicating
consent of the plan for testing. The Problem Tracking and Reporting tool is ready for use. The Change Management and Configuration
Management rules are in place.
The environment for testing, including databases, application programs, and connectivity has been defined, constructed, and verified.
Exit Criteria
In establishing the exit/acceptance criteria for the Automated Testing during the System/Integration Phase of the test, the Project
Completion Criteria defined in the Project Definition Document (PDD) should provide a starting point. All automated test cases have been
executed as documented. The percent of successfully executed test cases met the defined criteria. Recommended criteria: No Critical or
High severity problem logs remain open and all Medium problem logs have agreed upon action plans; successful execution of the
application to validate accuracy of data, interfaces, and connectivity.
Pass/Fail Criteria
The results for each test must be compared to the pre-defined expected test results, as documented in the ADTP (and DTP where
applicable). The actual results are logged in the Test Case detail within the Detail Test Plan if those results differ from the expected results.
If the actual results match the expected results, the Test Case can be marked as a passed item, without logging the duplicated results.
A test case passes if it produces the expected results as documented in the ADTP or Detail Test Plan (manual test plan). A test case fails if
the actual results produced by its execution do not match the expected results. The source of failure may be the application under test, the
test case, the expected results, or the data in the test environment. Test case failures must be logged regardless of the source of the failure.
Any bugs or problems will be logged in the DEFECT TRACKING TOOL.
The responsible application resource corrects the problem and tests the repair. Once this is complete, the tester who generated the
problem log is notified, and the item is re-tested. If the retest is successful, the status is updated and the problem log is closed.
If the retest is unsuccessful, or if another problem has been identified, the problem log status is updated and the problem description is
updated with the new findings. It is then returned to the responsible application personnel for correction and test.
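The pass/fail rule described above — record actual results only when they differ from the documented expected results — can be sketched as follows; the field names are illustrative.

```python
# Hedged sketch of the pass/fail rule: a case passes when actual output
# matches the documented expected result; only failures are logged with
# their actual results. Field names are illustrative, not a tool's schema.

def judge(cases):
    """cases: list of dicts with 'id', 'expected', 'actual'."""
    passed, failure_log = [], []
    for case in cases:
        if case["actual"] == case["expected"]:
            passed.append(case["id"])          # no duplicated results logged
        else:
            failure_log.append({"id": case["id"],
                                "expected": case["expected"],
                                "actual": case["actual"]})
    return passed, failure_log

passed, failures = judge([
    {"id": "TC-1", "expected": "OK", "actual": "OK"},
    {"id": "TC-2", "expected": "OK", "actual": "ERROR 42"},
])
print(passed)    # passing IDs only
print(failures)  # failed case with its actual result recorded
```

Note that the rule logs a failure regardless of whether the source turns out to be the application, the test case, the expected results, or the test data.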
Severity Codes are used to prioritize work in the test phase. They are assigned by the test group and are not modifiable by any other group.
The following standard Severity Codes to be used for identifying defects are:
Table 1 Severity Codes
1. Critical - Automated tests cannot proceed further within the applicable test case (no workaround).
2. High - The test case or procedure can be completed, but produces incorrect output when valid information is input.
3. Medium - The test case or procedure can be completed and produces correct output when valid information is input, but
produces incorrect output when invalid information is input (e.g., if no special characters are allowed as part of the
specifications, but the system allows a user to continue when a special character is part of the test, this is a medium severity).
4. Low - All test cases and procedures passed as written, but there could be minor revisions, cosmetic changes, etc.
These defects do not impact functional execution of the system.
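Read as a decision rule, the four codes order themselves naturally. A hedged sketch (the boolean observations about a test case are illustrative, not part of the standard):

```python
# Illustrative mapping of the severity-code table to a decision rule.
# The boolean inputs are hypothetical observations about one test case.

def severity(can_complete, valid_input_correct, invalid_input_correct):
    if not can_complete:
        return 1  # Critical: test cannot proceed, no workaround
    if not valid_input_correct:
        return 2  # High: incorrect output for valid input
    if not invalid_input_correct:
        return 3  # Medium: incorrect output only for invalid input
    return 4      # Low: passed as written; cosmetic revisions at most

print(severity(False, False, False))  # 1
print(severity(True,  False, False))  # 2
print(severity(True,  True,  False))  # 3
print(severity(True,  True,  True))   # 4
```

Because each code is determined by observable behavior, testers can assign it objectively, which is the first benefit claimed below.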
The use of the standard Severity Codes produces four major benefits:
• Standard Severity Codes are objective and can be easily and accurately assigned by those executing the test. Time spent in
discussion about the appropriate priority of a problem is minimized.
• Standard Severity Code definitions allow an independent assessment of the risk to the on-schedule delivery of a product that
functions as documented in the requirements and design documents.
• Use of the standard Severity Codes works to ensure consistency in the requirements, design, and test documentation with an
appropriate level of detail throughout.
• Use of the standard Severity Codes promotes effective escalation procedures.
Test Scope
The scope of testing identifies the items which will be tested and the items which will not be tested within the System/Integration Phase of
testing.
Items to be tested by Automation (PRODUCT NAME ...)
Items not to be tested by Automation(PRODUCT NAME ...)
Test Approach
Description of Approach
The mission of Automated Testing is the process of identifying recordable test cases through all appropriate paths of a website, creating
repeatable scripts, interpreting test results, and reporting to project management. For the Generic Project, the automation test team will
focus on positive testing and will complement the manual testing undergone on the system. Automated test results will be generated,
formatted into reports and provided on a consistent basis to Generic project management.
System testing is the process of testing an integrated hardware and software system to verify that the system meets its specified
requirements. It verifies proper execution of the entire set of application components including interfaces to other applications. Project
teams of developers and test analysts are responsible for ensuring that this level of testing is performed.
Integration testing is conducted to determine whether or not all components of the system are working together properly. This testing
focuses on how well all parts of the web site hold together, whether links inside and outside the website are working, and whether all parts of the
website are connected. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.
For this project, the System and Integration ADTP and Detail Test Plan complement each other.
Since the goal of the System and Integration phase testing is to identify the quality of the structure, content, accuracy and consistency,
response time and latency, and performance of the application, test cases are included which focus on determining how well this quality
goal is accomplished.
Content testing focuses on whether the content of the pages match what is supposed to be there, whether key phrases exist continually in
changeable pages, and whether the pages maintain quality content from version to version.
Accuracy and consistency testing focuses on whether today’s copies of the pages download the same as yesterday’s, and whether the data
presented to the user is accurate enough.
Response time and latency testing focuses on whether the web site server responds to a browser request within certain performance
parameters, whether response time after a SUBMIT is acceptable, or whether parts of a site are so slow that the user discontinues working.
Although Loadrunner provides the full measure of this test, there will be various AD HOC time measurements within certain Winrunner
Scripts as needed.
Performance testing (Loadrunner) focuses on whether performance varies by time of day or by load and usage, and whether performance
is adequate for the application.
Completion of automated test cases is denoted in the test cases with indication of pass/fail and follow-up action.
Test Definition
This section addresses the development of the components required for the specific test. Included are identification of the functionality to be
tested by automation, the associated automated test cases and scenarios. The development of the test components parallels, with a slight
lag, the development of the associated product components.
1. Write and receive approval of the ADTP from Generic Project management
2. Manually test the cases in the plan to make sure they actually work before recording repeatable scripts
3. Record appropriate scripts and file them according to the naming conventions described within this document
4. Initial order of automated script runs will be to load GUI Maps through a STARTUP script. After the successful run of this script,
scripts testing all paths will be kicked off. Once an appropriate number of PNRs are generated, GenericCancel scripts will be
used to automatically take the inventory out of the test profile and system environment. During the automation test period,
requests for testing of certain functions can be accommodated as necessary as long as these functions have the ability to be
tested by automation.
5. The ability to use Generic Automation will be READ ONLY for anyone outside of the test group. Of course, this is required to
maintain the pristine condition of master scripts on our data repository.
6. Generic Test Group will conduct automated tests under the rules specified in our agreement for use of the Winrunner tool
marketed by Mercury Interactive.
7. Results filed for each run will be analyzed as necessary, reports generated, and provided to upper management.
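The run order in step 4 (a STARTUP script loads the GUI Maps, then the path scripts run, then GenericCancel scripts clean up) can be sketched as a simple orchestration loop. The script names and bodies below are placeholders, not the project's actual scripts.

```python
# Sketch of the run order in step 4: a STARTUP script first, the path
# scripts only if startup succeeds, then cleanup (cancel) scripts last.
# All script bodies here are placeholders.

def orchestrate(startup, path_scripts, cancel_scripts):
    order = []
    if not startup():                     # load GUI Maps; abort run on failure
        return order
    order.append("STARTUP")
    for name, script in path_scripts:     # scripts testing all paths
        script()
        order.append(name)
    for name, script in cancel_scripts:   # remove generated inventory
        script()
        order.append(name)
    return order

ran = orchestrate(
    startup=lambda: True,
    path_scripts=[("path_book", lambda: None)],
    cancel_scripts=[("GenericCancel", lambda: None)],
)
print(ran)
```

Gating everything on the startup script mirrors the plan: if the GUI Maps fail to load, no path script can recognize its windows, so nothing else should run.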
Test Issues and Risks
Issues
The table below lists known project testing issues to date. Upon sign-off of the ADTP and Detail Test Plan, this table will not be maintained,
and these issues and all new issues will be tracked through the Issue Management System, as indicated in the project's approved Issue
Management Process.
Issue: COMPANY NAME test team is not in possession of market data regarding which browsers are most in use in the CUSTOMER target market.
Impact: Testing may not cover some browsers used by CLIENT customers.
Target Date for Resolution: Beginning of Automated Testing during the System and Integration Test Phase.
Owner: CUSTOMER TO PROVIDE
OTHER . . .
Risks
The table below identifies any high impact or highly probable risks that may impact the success of the Automated testing process.
Risk Assessment Matrix
1. Unstable Environment
Potential Impact: Delayed Start
Likelihood of Occurrence: HISTORY OF PROJECT
Difficulty of Timely Detection: Immediately
Overall Threat (H, M, L): .
2. Quality of Unit Testing
Potential Impact: Greater delays taken by automated scripts
Likelihood of Occurrence: Dependent upon quality standards of development group
Difficulty of Timely Detection: Immediately
Overall Threat (H, M, L): .
3. Browser Issues
Potential Impact: Intermittent Delays
Likelihood of Occurrence: Dependent upon browser version
Difficulty of Timely Detection: Immediately
Overall Threat (H, M, L): .
Risk Management Plan
Risk Area / Preventative Action / Contingency Plan / Action Trigger / Owner:
1. Preventative Action: Meet with Environment Group (Contingency Plan, Action Trigger, Owner: .)
2. Preventative Action: Meet with Development Group (Contingency Plan, Action Trigger, Owner: .)
3. .
Traceability Matrix
The purpose of the Traceability Matrix is to identify all business requirements and to trace each requirement through the project's
completion.
Each business requirement must have an established priority as outlined in the Business Requirements Document.
They are:
Essential - Must satisfy the requirement to be accepted by the customer.
Useful - Value-added requirement influencing the customer's decision.
Nice-to-have - Cosmetic, non-essential condition; makes the product more appealing.
The Traceability Matrix will change and evolve throughout the entire project life cycle. The requirement definitions, priority, functional
requirements, and automated test cases are subject to change and new requirements can be added. However, if new requirements are
added or existing requirements are modified after the Business Requirements document and this document have been approved, the
changes will be subject to the change management process.
The Traceability Matrix for this project will be developed and maintained by the test coordinator. At the completion of the matrix definition
and the project, a copy will be added to the project notebook.
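One minimal way to picture the matrix is as a mapping from each business requirement (with its priority) to the test cases that trace to it, which also makes uncovered requirements easy to report. All IDs below are invented for illustration.

```python
# Hedged sketch of a traceability matrix: each business requirement keeps
# its priority and the test cases that trace to it, so requirements with
# no coverage are easy to report. IDs and priorities are invented.

matrix = {
    "REQ-001": {"priority": "Essential",    "tests": ["TC-01", "TC-02"]},
    "REQ-002": {"priority": "Useful",       "tests": []},
    "REQ-003": {"priority": "Nice-to-have", "tests": ["TC-07"]},
}

def uncovered(matrix):
    """Requirements with no test case tracing to them."""
    return [req for req, row in matrix.items() if not row["tests"]]

print(uncovered(matrix))
```

Because the matrix evolves with the project, new or changed rows would pass through the change management process before the mapping is updated.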
Test Case
A test case is a transaction or list of transactions that will satisfy the requirements statement in a test scenario. The test case must contain
the actual entries to be executed as well as the expected results, i.e., what a user entering the commands would see as a system response.
Test Procedure
Test procedures define the activities necessary to execute a test case or set of cases. Test procedures may contain information regarding
the loading of data and executables into the test system, directions regarding sign in procedures, instructions regarding the handling of test
results, and anything else required to successfully conduct the test.
Installation Configuration
Install Step: Selection
Installation Components: Full
Destination Directory: C:\sqa6
Type of Repository: Microsoft Access
Scripting Language: SQA Basic only
Test Station Name: Your PC Name
DLL messages: Overlay all DLLs the system prompts for. Robot will not run without its own DLLs.
Repository Creation
Item: Information
Repository Name:
Location:
Mapped Drive Letter:
Project Name:
Users set up for Project: Admin - no password
Sbh files used in project scripts:
Client Setup Options for the SQA Robot tool
Option Window: Recording
ID list selections by: Contents
ID Menu selections by: Text
Record unsupported mouse drags as: Mouse click if within object
Window positions: Record Object as text; Auto record window size
While Recording: Put Robot in background
Option Window: Playback
Test Procedure Control: Delay Between: 5000 milliseconds
Partial Window Caption: On Each window search
Caption Matching options: Check - Match reverse captions; Ignore file extensions; Ignore Parenthesis
Option Window: Test Log
Test Log Management: Output Playback results to test log (All details); Update SQA repository; View test log after playback
Test Log Data: Specify Test Log Info at Playback
Option Window: Unexpected Window
Detect: Check
Capture: Check
Playback response: Select pushbutton with focus
On Failure to remove: Abort playback
Option Window: Wait States
Wait Pos/Neg Region: Retry - 4; Timeout after 90
Automatic wait: Retry - 2; Timeout after 120
Keystroke option: Playback delay 100 milliseconds; Check record delay after enter key
Option Window: Error Recovery
On Script command Failure: Abort Playback
On test case failure: Continue Execution
SQA trap: Check all but last 2
Object Recognition: Do not change
Object Data Test Definitions: Do not change
Editor: Leave with defaults
Preferences: Leave with defaults
Identify what Servers and Databases the automation will run against.
This {Project name} will use the following Servers:
{Add servers}
On these Servers it will be using the following Databases:
{Add databases}
1. Use assisting scripts to open and close applications and activity windows.
2. Use global constants to pass data into scripts and between scripts.
3. Prefer main menu selections over double clicks, toolbar items, and pop-up menus whenever possible.
4. Each test procedure should have a manual test plan associated with it.
5. Do not Save in the test procedure unless it is absolutely necessary; this will prevent the need to write numerous clean-up scripts.
6. Do a window existence test for every window you open; this will prevent scripts dying from slow client/server calls.
7. Do not use the mouse for drop-down selections; whenever possible, use hotkeys and the arrow keys.
8. When navigating through a window, use the tab and arrow keys instead of the mouse; this will make maintenance of scripts after UI
changes easier in the future.
9. Create a template header file called testproc.tpl. This file will insert template header information on the top of all scripts recorded. This
template area can be used for modification tracking and commenting on the script.
10. Comment all major selections or events in the script. This will make debugging easier.
11. Make sure that you maximize all MDI main windows in login initial scripts.
12. When recording, make sure you begin and end your scripts in the same position. For example, on the platform browser, always start your script
by opening the browser tree and selecting your activity (this will ensure that the activity window is always in the same position); likewise,
always end your scripts by collapsing the browser tree.
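Guideline 6 above (test for window existence so scripts don't die on slow client/server calls) amounts to a polling loop with a timeout. A hedged sketch follows; `window_exists` is a hypothetical stand-in for the tool's own check.

```python
import time

# Sketch of guideline 6: poll for a window before acting on it, so a slow
# client/server call merely delays the script instead of killing it.
# window_exists is a hypothetical stand-in for the tool's own check.

def wait_for_window(window_exists, title, timeout=90, interval=0.05):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if window_exists(title):
            return True
        time.sleep(interval)
    return False          # caller can log a failure instead of crashing

# Usage: a fake check that "finds" the window on the third poll.
calls = {"n": 0}
def fake_exists(title):
    calls["n"] += 1
    return calls["n"] >= 3

assert wait_for_window(fake_exists, "Login", timeout=5)
```

The 90-second default mirrors the wait-state timeout configured for the tool above, but the right value depends on the environment under test.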
Bug:
A fault in a program which causes the program to perform in an unintended or unanticipated manner.
Defect:
Anything that does not perform as specified. This could be hardware, software, network, performance, format, or functionality.
Defect risk:
The process of identifying the amount of risk a defect could cause. This assists in determining whether the defect could go undetected into
implementation.
Defect log:
A log or database of all defects that were uncovered during the testing and maintenance phase of development. It categorizes defects into
severity and similarity in an attempt to identify areas requiring special attention.
Bug:
An informal word describing any of the above.
Deviation from the expected result.
A software bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working as intended, or produces an
incorrect result. Bugs arise from mistakes and errors, made by people, in either a program's source code or its design. It is said that there
are bugs in all useful computer programs, but well-written programs contain relatively few bugs, and these bugs typically do not prevent the
program from performing its task. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality,
is said to be buggy. Reports about bugs in a program are referred to as bug reports, also called PRs (problem reports), trouble reports, CRs
(change requests), and so forth.
Defect:
A problem in an algorithm that leads to failure.
A defect is something that normally works but has something out-of-spec.
Bug Impacts
Low impact
This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in
layout/formatting. These problems do not impact use of the product in any substantive way.
Medium impact
This is a problem that: a) affects a more isolated piece of functionality; b) occurs only at certain boundary conditions; c) has a workaround
(where "don't do that" might be an acceptable answer to the user); d) occurs only at one or two customer sites; or e) is very intermittent.
High impact
This should be used only for serious problems, affecting many sites, with no workaround. Frequent or reproducible crashes/core
dumps/GPFs would fall in this category, as would major functionality not working.
Urgent impact
This should be reserved for only the most catastrophic of problems. Data corruption, complete inability to use the product at almost any site,
etc. For released products, an urgent bug would imply that shipping of the product should stop immediately, until the problem is resolved.
Error rate:
The mean time between errors. This can be a statistical value between any errors, or it can be broken down into the rate of occurrence
of similar errors. Error rate also has a perception component, which matters when identifying the "good-enough" balance for errors: the
mean time between errors should be greater than what the ultimate user will accept.
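As a sketch, the mean time between errors can be computed from a log of error timestamps; the function name and time units below are illustrative, not part of any standard:

```python
def mean_time_between_errors(timestamps):
    """Mean gap, in seconds, between consecutive logged errors."""
    if len(timestamps) < 2:
        raise ValueError("need at least two errors to compute a gap")
    ordered = sorted(timestamps)
    # Gaps between each error and the next one in time order.
    gaps = [b - a for a, b in zip(ordered, ordered[1:])]
    return sum(gaps) / len(gaps)
```

Errors logged at t = 0, 60, and 180 seconds give gaps of 60 and 120, so the mean time between errors is 90 seconds.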
Issue log:
A log kept of all issues raised during the development process. This could contain problems uncovered, the impact of changes to
specifications, or the loss of a key individual. It is anything that must be tracked and monitored.
Priority
Priority is Business.
Priority is a measure of the importance of getting the defect fixed, as governed by the impact to the application, the number of users
affected, damage to the company's reputation, and/or loss of money.
Priority levels:
• Now: drop everything and take care of it as soon as you see this (usually for blocking bugs)
• P1: fix before next build to test
• P2: fix before final release
• P3: we probably won’t get to these, but we want to track them anyway
Priority levels
1. Must fix as soon as possible. Bug is blocking further progress in this area.
2. Should fix soon, before product release.
3. Fix if time; somewhat trivial. May be postponed.
Priority levels
• High: This has a major impact on the customer. This must be fixed immediately.
• Medium: This has a major impact on the customer. The problem should be fixed before release of the current version in
development, or a patch must be issued if possible.
• Low: This has a minor impact on the customer. The flaw should be fixed if there is time, but it can be deferred until the next
release.
Severity
Severity is Technical.
Severity is a measure of the impact of the defect on the overall operation of the application being tested.
Severity level:
The degree of impact the issue or problem has on the project. Severity 1 usually means the highest level requiring immediate attention.
Severity 5 usually represents a documentation defect of minimal impact.
Severity levels
• High: A major issue where a large piece of functionality or major system component is completely broken. There is no
workaround and testing cannot continue.
• Medium: A major issue where a large piece of functionality or major system component is not working properly. There is a
workaround, however, and testing can continue.
• Low: A minor issue that imposes some loss of functionality, but for which there is an acceptable and easily reproducible
workaround. Testing can proceed without interruption.
S2 - Medium/Workaround. A problem exists (for example, something required by the specs is wrong), but the tester can go on with testing.
The incident affects an area of functionality but there is a workaround which negates the impact to the business process. This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs only at one or two customers, or is intermittent.
S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor
errors in layout/formatting. These problems do not impact use of the product in any substantive way. These are incidents that are cosmetic
in nature and of no or very low impact to business processes.
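The severity rules above can be sketched as a simple classifier. This is a rule-of-thumb illustration only; real projects define their own mapping:

```python
def classify_severity(functionality_broken, has_workaround, cosmetic_only):
    """Map an incident onto the High/Medium/Low scale described above."""
    if cosmetic_only:
        return "Low"       # cosmetic: testing proceeds without interruption
    if functionality_broken and not has_workaround:
        return "High"      # no workaround: testing cannot continue
    return "Medium"        # workaround exists: testing can continue
```

For example, a broken component with no workaround classifies as "High", while the same breakage with a workaround drops to "Medium".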
Triage Meetings (Bug Councils)
Bug Triage Meetings are project meetings in which open bugs are divided into categories.
• Check if the client operating system(OS) version and patches meet system requirements.
• Check if the correct version of the browser is installed on the client machine.
• Check if the browser is properly installed on the machine.
• Check the browser settings.
• Check with different browsers (e.g., Netscape Navigator versus Internet Explorer).
• Check with different supported versions of the same browsers (e.g., 3.1, 3.2, 4.2, 4.3, etc.).
• Ranges of numbers (such as all numbers between 10 and 99, which are of the same two-digit equivalence class)
• Membership in groups (dates, times, country names, etc.)
• Invalid inputs (placing symbols into text-only fields, etc)
• Equivalent output events (variation of inputs that produce the same output)
• Equivalent operating environments
• Repetition of activities
• Number of records in a database (or other equivalent objects)
• Equivalent sums or other arithmetic results
• Equivalent numbers of items entered (such as the number of characters entered into a field)
• Equivalent space (on a page or on a screen)
• Equivalent amount of memory, disk space, or other resources available to a program.
Boundary values mark the transition points between equivalence classes. They can be limit values that define the line between supported
inputs and nonsupported inputs, or they can define the line between supported system requirements and nonsupported system
requirements. Applications are more susceptible to errors at the boundaries of equivalence classes, so boundary condition tests can be quite
effective at uncovering errors.
Generally, each equivalence class is partitioned by its boundary values. Nevertheless, not all equivalence classes have boundaries. For
example, given the browser equivalence classes Netscape 4.6.1 and Microsoft Internet Explorer 4.0 and 5.0, there is no
boundary defined among the classes.
Each equivalence class represents a potential risk. Under the equivalence class approach to developing test cases, at most nine test cases
should be executed against each partition.
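As an illustration, the boundary-condition inputs for a numeric equivalence class such as the two-digit range can be generated mechanically. This is a minimal sketch of the classic "just below, on, and just above each limit" probes:

```python
def boundary_values(low, high):
    """Classic boundary-condition probes for an inclusive numeric range:
    just below, on, and just above each limit."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]
```

For the two-digit class 10..99 this yields [9, 10, 11, 98, 99, 100]; 9 and 100 probe the invalid side of each boundary.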
Defect/problem documentation
A defect tracking/problem reporting system should provide:
• Standardized
• Inputs
• Expected Results
• Actual Results
• Anomalies
• Date
• Time
• Procedure Step
• Environment
• Attempts To Repeat
• Testers
• Observers
• Non-Standardized
• Defect ID
• Priority
• Severity
• Test Cycle
• Test Procedure
• Test Case
• Occurrences
• Test Requirement
• Person Reporting
• Defect Status
• Defect Action
• Defect Description
• Defect Symptom
• Found In Build
• Software Module
• Module Description
• Related modules
• Person Assigned
• Date Assigned
• Estimated Time to Fix
• Resolution
• Resolution Description
• Fix Load Date
• Fix Load Number
• Repaired in Build
• Date Closed
• Contact Person
• Attachments
• Rework Cycles
• Owner
• Work Around
• Person Investigating
• Emergency/Scheduled
• Programming Time
• Process or Product
• Customized defect/problem reporting data fields
• ACD capability
• Predefined queries/reports
• Custom query/reporting
• Free text searching
• Cut and paste
• On-screen report display
• Printed reports
• Support all Network types
• Provide Record Locking
• Provide data recovery
• Support for dial-in access
• An interface to the E-mail system
• Manual notification
• Automatic notification of team members
• Password protection of team members
• Limited access to functions based on user type
Write bug reports immediately; the longer you wait between finding the problem and reporting it, the more likely it is the description will be
incomplete, the problem not reproducible, or simply forgotten.
Writing a one-line report summary (Bug's report title) is an art. You must master it. Summaries help everyone quickly review outstanding
problems and find individual reports. The summary line is the most frequently and carefully read part of the report. When a summary makes
a problem sound less severe than it is, managers are more likely to defer it. Alternatively, if your summaries make problems sound more
severe than they are, you will gain a reputation for alarmism. Don't use the same summary for two different reports, even if they are similar.
The summary line should describe only the problem, not the replication steps. Don't run the summary into the description (Steps to
reproduce) as they will usually be printed independently of each other in reports.
Ideally you should be able to write this clearly enough for a developer to reproduce and fix the problem, and another QA engineer to verify
the fix without them having to go back to you, the author, for more information. It is much better to over communicate in this field than say
too little. Of course it is ideal if the problem is reproducible and you can write down those steps. But if you can't reproduce a bug, and try
and try and still can't reproduce it, admit it and write the report anyway. A good programmer can often track down an irreproducible problem
from a careful description. For a good discussion on analyzing problems and making them reproducible, see Chapter 5 of Testing Computer
Software by Cem Kaner.
The most controversial item in a bug report is often the bug impact: Low, Medium, High, or Urgent. The report should show the priority
which you, the bug submitter, believe to be appropriate, and it should not be changed silently.
Bug Report Components
Report number:
Unique number given to a bug.
Problem Summary:
A one-line data entry field stating precisely what the problem is.
Report Type:
Describes the type of problem found, for example it could be software or hardware bug.
Severity:
How severe you, the reporter, consider the bug to be.
Various levels of severity: Low - Medium - High - Urgent
Environment:
Environment in which the bug is found.
Detailed Description:
Detailed description of the bug that is found
How to reproduce:
Detailed description of how to reproduce the bug.
Reported by:
The name of the person who wrote the report.
Assigned to developer:
The name of the developer assigned to fix the bug.
Status:
Open:
The status of the bug when it is entered.
Fixed / feedback:
The status of the bug when it is fixed.
Closed:
The status of the bug when the fix is verified.
(A bug can only be closed by a QA person. Usually, the problem is closed by the QA manager.)
Deferred:
The status of the bug when it is postponed.
User error:
The status of the bug when the user made an error.
Not a bug:
The status of the bug when it is not a bug.
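The statuses above can be sketched as a small state machine. The set of allowed transitions below is an illustrative assumption; real trackers let administrators customize the workflow:

```python
# Allowed transitions for the bug statuses described above (a sketch).
TRANSITIONS = {
    "Open": {"Fixed", "Deferred", "User error", "Not a bug"},
    "Fixed": {"Closed", "Open"},   # reopened if the fix fails retest
    "Deferred": {"Open"},
    "User error": set(),
    "Not a bug": set(),
    "Closed": set(),
}

def move(status, new_status):
    """Return the new status, rejecting transitions the workflow forbids."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status
```

For instance, a bug can go Open -> Fixed -> Closed, but a Closed bug cannot be moved back to Open in this sketch.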
Priority:
Assigned by the project manager who asks the programmers to fix bugs in priority order.
Resolution:
Defines the current status of the problem. There are four types of resolution: deferred, not a problem, will not fix, and as designed.
Defect(bug) report:
An incident report defining the type of defect and the circumstances in which they occurred. (defect tracking system)
The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and
determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-
tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools
are available. The following are items to consider in the tracking process:
• Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to
the test case/test script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to
know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and
reporting/summary capabilities are needed for managers.
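A minimal record carrying a few of the core fields above might look like the sketch below. The field names are illustrative, since every tracker configures its own schema:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """A minimal bug record; real trackers configure their own fields."""
    bug_id: str
    summary: str          # one-line bug description
    description: str      # full description / steps to reproduce
    severity: str         # e.g. 'critical' down to 'low'
    status: str = "New"
    assigned_to: str = ""
    history: list = field(default_factory=list)

    def assign(self, developer):
        # Record who the problem was handed to.
        self.assigned_to = developer
        self.history.append(f"assigned to {developer}")
```

Creating a report and assigning it leaves an audit trail in `history`, which is the kind of notification hook a tracking process builds on.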
Defects can be logged using tools (e.g., Siebel, Track, PVCS, etc.), or by documenting the defects and maintaining the document in a
repository. Testers should write the defect description efficiently so that it is useful to others within the project, and the documentation
should be transparent.
Abstract of a Defect
Testers should specify a brief description of the defect, followed by an observation (e.g., the system displays an error message such as
"Unable to update the record", but according to the functionality the system should update the record). It also helps if the tester specifies
a few more observation points, such as:
- Whether the defect occurs only in a particular version (e.g., Adobe versions for a report)
- Whether the defect is also found in other modules
- Inconsistency of the application while reproducing the defect (e.g., sometimes reproducible and sometimes not)
Conclusion
Giving a brief description of the defect makes it:
· Easy to analyze the cause of the defect
· Easy to fix the defect
· Possible to avoid rework
· Possible for testers to save time
· Possible to avoid defect duplication
· Easier to keep track of defects
Preventing bugs
It can be psychologically difficult for some engineers to accept that their design contains bugs. They may hide behind euphemisms like
"issues" or "unplanned features". This is also true of corporate software where a fix for a bug is often called "a reliability enhancement".
Bugs are a consequence of the nature of the programming task. Some bugs arise from simple oversights made when computer
programmers write source code carelessly or transcribe data incorrectly. Many off-by-one errors fall into this category. Other bugs arise
from unintended interactions between different parts of a computer program. This happens because computer programs are often complex,
often having been programmed by several different people over a great length of time, so that programmers are unable to mentally keep
track of every possible way in which different parts can interact (the so-called hrair limit). Many race condition bugs fall into this category.
The computer software industry has put a great deal of effort into finding methods for preventing programmers from inadvertently
introducing bugs while writing software. These include:
Programming techniques
Bugs often create inconsistencies in the internal data of a running program. Programs can be written to check the consistency of their own
internal data while running. If an inconsistency is encountered, the program can immediately halt, so that the bug can be located and fixed.
Alternatively, the program can simply inform the user, attempt to correct the inconsistency, and continue running.
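A minimal sketch of such a self-checking structure, using a hypothetical Account type that verifies its own invariant after every change:

```python
class Account:
    """A structure that checks the consistency of its own data after each change."""

    def __init__(self, balance):
        self.balance = balance
        self._check()

    def _check(self):
        # Halt as soon as the inconsistency appears, close to its cause,
        # rather than letting a corrupt value propagate.
        if self.balance < 0:
            raise RuntimeError(f"invariant violated: balance={self.balance}")

    def withdraw(self, amount):
        self.balance -= amount
        self._check()
```

A withdrawal that would drive the balance negative is caught at the moment of the inconsistency, which is exactly where the bug lives.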
Development methodologies
There are several schemes for managing programmer activity, so that fewer bugs are produced. Many of these fall under the discipline of
software engineering (which addresses software design issues as well.) For example, formal program specifications are used to state the
exact behavior of programs, so that design bugs can be eliminated. Programming language support
Programming languages often include features which help programmers deal with bugs, such as exception handling. In addition, many
recently-invented languages have deliberately excluded features which can easily lead to bugs. For example, the Java programming
language does not support pointer arithmetic.
Debugging
Finding and fixing bugs, or "debugging", has always been a major part of computer programming. Maurice Wilkes, an early computing
pioneer, described his realization in the late 1940s that much of the rest of his life would be spent finding mistakes in his own programs. As
computer programs grow more complex, bugs become more common and difficult to fix. Often programmers spend more time and effort
finding and fixing bugs than writing new code.
Usually, the most difficult part of debugging is locating the erroneous part of the source code. Once the mistake is found, correcting it is
usually easy. Programs known as debuggers exist to help programmers locate bugs. However, even with the aid of a debugger, locating
bugs is something of an art.
Typically, the first step in locating a bug is finding a way to reproduce it easily. Once the bug is reproduced, the programmer can use a
debugger or some other tool to monitor the execution of the program in the faulty region, and (eventually) find the problem. However, it is
not always easy to reproduce bugs. Some bugs are triggered by inputs to the program which may be difficult for the programmer to re-
create. One cause of the Therac-25 radiation machine deaths was a bug that occurred only when the machine operator very rapidly entered
a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in testing or when the manufacturer
attempted to duplicate it. Other bugs may disappear when the program is run with a debugger; these are heisenbugs (humorously named
after the Heisenberg uncertainty principle.)
Debugging is still a tedious task requiring considerable manpower. Since the 1990s, particularly following the Ariane 5 Flight 501 disaster,
there has been a renewed interest in the development of effective automated aids to debugging. For instance, methods of static analysis by
abstract interpretation have already made significant achievements, while still remaining much of a work in progress.
Common types of computer bugs (1)
* Divide by zero
* Infinite loops
* Arithmetic overflow or underflow
* Exceeding array bounds
* Using an uninitialized variable
* Accessing memory not owned (Access violation)
* Memory leak or Handle leak
* Stack overflow or underflow
* Buffer overflow
* Deadlock
* Off by one error
* Race hazard
* Loss of precision in type conversion
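Two of these bug types can be shown in a few lines. The buggy version below contains an off-by-one error, and for n equal to the list length it also exceeds the array bounds:

```python
def sum_first_n_buggy(values, n):
    # Off-by-one: range(n + 1) visits n + 1 items, and for n == len(values)
    # it also exceeds the array bounds (IndexError).
    return sum(values[i] for i in range(n + 1))

def sum_first_n(values, n):
    # Fixed: range(n) visits exactly the first n items.
    return sum(values[i] for i in range(n))
```

Asked for the first 2 of [1, 2, 3], the fixed version returns 3, while the buggy one silently includes a third item; asked for all 3, the buggy one crashes.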
The ISO 9126 standard is divided into four parts which address, respectively, the following subjects: the quality model; external metrics;
internal metrics; and quality-in-use metrics.
The quality model established in the first part of the standard, ISO 9126-1, classifies software quality in a structured set of factors as follows:
* Functionality - A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that
satisfy stated or implied needs.
o Suitability
o Accuracy
o Interoperability
o Compliance
o Security
* Reliability - A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a
stated period of time.
o Maturity
o Recoverability
o Fault Tolerance
* Usability - A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied
set of users.
o Learnability
o Understandability
o Operability
* Efficiency - A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources
used, under stated conditions.
o Time Behaviour
o Resource Behaviour
* Maintainability - A set of attributes that bear on the effort needed to make specified modifications.
o Stability
o Analysability
o Changeability
o Testability
* Portability - A set of attributes that bear on the ability of software to be transferred from one environment to another.
o Installability
o Conformance
o Replaceability
o Adaptability
The sub-characteristic Conformance is not listed above and applies to all characteristics. Examples are conformance to legislation
concerning Usability or Reliability.
Each quality sub-characteristic (as adaptability) is further divided into attributes. An attribute is an entity which can be verified or measured
in the software product. Attributes are not defined in the standard, as they vary between different software products.
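The characteristic/sub-characteristic structure enumerated above can be represented as a simple mapping; the code is a sketch for illustration only:

```python
# The ISO 9126-1 characteristics and sub-characteristics enumerated above.
ISO_9126_MODEL = {
    "Functionality": ["Suitability", "Accuracy", "Interoperability",
                      "Compliance", "Security"],
    "Reliability": ["Maturity", "Recoverability", "Fault Tolerance"],
    "Usability": ["Learnability", "Understandability", "Operability"],
    "Efficiency": ["Time Behaviour", "Resource Behaviour"],
    "Maintainability": ["Stability", "Analysability", "Changeability",
                        "Testability"],
    "Portability": ["Installability", "Conformance", "Replaceability",
                    "Adaptability"],
}

def sub_characteristics(characteristic):
    """Look up the sub-characteristics of one quality characteristic."""
    return ISO_9126_MODEL[characteristic]
```

An organization defining its own quality model would attach measurable attributes and target values beneath each of these sub-characteristics.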
Software product is defined in a broad sense: it encompasses executables, source code, architecture descriptions, and so on. As a result,
the notion of user extends to operators as well as to programmers, who are users of components such as software libraries.
The standard provides a framework for organizations to define a quality model for a software product. In doing so, however, it leaves up to
each organization the task of specifying precisely its own model. This may be done, for example, by specifying target values for quality
metrics which evaluate the degree of presence of quality attributes.
Internal metrics are those which do not rely on software execution (static measures).
Quality in use metrics are only available when the final product is used in real conditions.
Ideally, internal quality determines external quality, which in turn determines the results of quality in use.
This standard stems from the model established in 1977 by McCall and his colleagues, who proposed a model to specify software quality.
The McCall quality model is organized around three types of Quality Characteristics:
* Factors (To specify): They describe the external view of the software, as viewed by the users.
* Criteria (To build): They describe the internal view of the software, as seen by the developer.
* Metrics (To control): They are defined and used to provide a scale and method for measurement.
ISO 9126 distinguishes between a defect and a nonconformity: a defect is "the nonfulfilment of intended usage requirements," whereas a
nonconformity is "the nonfulfilment of specified requirements." A similar distinction is made between validation and verification, known as
V&V in the testing trade.
Glitch City, a Pokémon programming error that creates a jumble of pixels.
A glitch is a short-lived fault in a system. The term is particularly common in the computing and electronics industries, and in circuit bending,
as well as among players of video games, although it is applied to all types of systems including human organizations. The term derives
from the German glitschen, meaning 'to slip.'
In electronics, a glitch is an electrical pulse of short duration that is usually the result of a fault or design error, particularly in a digital circuit.
For example, many electronic components such as flip-flops are triggered by a pulse that must not be shorter than a specified minimum
duration, otherwise the component may malfunction. A pulse shorter than the specified minimum is called a glitch. A related concept is the
runt pulse, a pulse whose amplitude is smaller than the minimum level specified for correct operation, and a spike, a short pulse similar to
glitch but often caused by ringing or crosstalk.
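The minimum-duration rule can be illustrated with a small filter that discards glitch pulses; representing each pulse as a (start, end) pair is an assumption made for the sketch:

```python
def filter_glitches(pulses, min_duration):
    """Keep only pulses at least min_duration long; shorter ones are glitches.

    Each pulse is a (start, end) pair in consistent time units.
    """
    return [(start, end) for (start, end) in pulses
            if (end - start) >= min_duration]
```

With a 5 ns minimum, a 2 ns pulse is rejected as a glitch while a 10 ns pulse passes.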
In video games, a glitch is a term used by players to indicate a bug or programming error of some sort. It may refer to either a helpful or
harmful error, but never an intended behavior. A programming error that makes the game freeze is often referred to as a glitch, as is an
error that, for example, gives the player 100 lives automatically. The occurrence of some glitches can be replicated deliberately by doing
certain tasks in a certain order. For example, the Missingno., 'M, and Glitch City glitches in the Pokémon series follow this principle. The
Mew glitch also works on the same principle.
The practice of exploiting glitches in video games is known as "glitching." For example, in an online game someone may use an error in the
map to get an advantage. This is sometimes considered cheating, but sometimes just considered part of the game. It is often against a
game's TOS (Terms of Service) and will be punished if discovered.
Sometimes glitches will be mistaken for hidden features. In the arcade version of Mortal Kombat, a rare glitch occasionally caused two
characters to be mixed together. Most often, these were ninja characters, resulting in a semi-red ninja character with the name "ERMAC"
(short for "error machine"). Upon discovering this, many players believed they had uncovered a secret character, when in fact they had only
uncovered a programming bug. Due to the rumors surrounding the glitch, Midway did eventually include a red ninja character named Ermac
as an official character in Ultimate Mortal Kombat 3, and he has subsequently appeared in other Mortal Kombat games, becoming an
instant fan favorite.
A workaround is a bypass of a recognized problem in a system. A workaround is typically a temporary fix that implies that a genuine
solution to the problem is needed. Frequently workarounds are as creative as true solutions, involving out-of-the-box thinking in their
creation.
Typically they are considered brittle in that they will not respond well to further pressure from a system beyond the original design. In
implementing a workaround it is important to flag the change so as to later implement a proper solution.
Placing pressure on a workaround may result in later failures in the system. For example, in computer programming workarounds are often
used to address a problem in a library, such as an incorrect return value. When the library is changed, the workaround may break the
overall program functionality, since it may expect the older, wrong behaviour from the library.
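A sketch of this failure mode, using a hypothetical library function with an incorrect return value:

```python
def library_percentage(part, whole):
    # Stand-in for a third-party call with an incorrect return value:
    # it forgets to multiply by 100.
    return part / whole

def percentage(part, whole):
    # Workaround: compensate for the wrong result. Flagged so it can be
    # removed later -- if the library is ever fixed to return a real
    # percentage, this compensation silently breaks.
    # TODO(workaround): delete once library_percentage is corrected.
    return library_percentage(part, whole) * 100
```

The TODO flag is the important part: it records that a proper fix is still owed, exactly as the text above recommends.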
A bugtracker is a ticket tracking system that is designed especially to manage problems (software bugs) with computer programs.
Typically bug tracking software allows the user to quickly enter bugs and search on them. In addition some allow users to specify a
workflow for a bug that automates a bug's lifecycle.
Most bug tracking software allows the administrator of the system to configure what fields are included on a bug.
Having a bug tracking solution is critical for most systems. Without a good bug tracking solution bugs will eventually get lost or poorly
prioritized.
Bugzilla is a general-purpose bug-tracking tool originally developed and used by the Mozilla Foundation. Since Bugzilla is web-based and is
free software / open-source software, it is also the bug tracking tool of choice for many projects, both open source and proprietary.
Bugzilla relies on an installed web server (such as Apache) and a database management system (such as MySQL or PostgreSQL) to
perform its work. Bugs can be submitted by anybody, and will be assigned to a particular developer. Various status updates for each bug
are allowed, together with user notes and bug examples.
Bugzilla's notion of a bug is very general; for instance, mozilla.org uses it to track feature requests as well.
Requirements
Release notes such as those for Bugzilla 2.20 indicate the exact set of dependencies, which include:
* A compatible database server (often a version of MySQL)
* A suitable release of Perl 5
* An assortment of Perl modules
* A compatible web server such as Apache (though any web server that supports CGI can work)
* A suitable mail transfer agent such as Sendmail, qmail, Postfix, or Exim
Anti-patterns, also referred to as pitfalls, are classes of commonly-reinvented bad solutions to problems. They are studied, as a category, in
order that they may be avoided in the future, and that instances of them may be recognized when investigating non-working systems.
The term originates in computer science, from the Gang of Four's Design Patterns book, which laid out examples of good programming
practice. The authors termed these good methods "design patterns", and opposed them to "anti-patterns". Part of good programming
practice is the avoidance of anti-patterns.
The concept is readily applied to engineering in general, and also applies outside engineering, in any human endeavour. Although the term
is not commonly used outside engineering, the concept is quite universal.
Some recognised computer programming anti-patterns
* Action at a distance: Unexpected interaction between widely separated parts of a system
* Accumulate and fire: Setting parameters for subroutines in a collection of global variables
* Ambiguous viewpoint: Presenting a model (usually OOAD) without specifying its viewpoint
* BaseBean: Inheriting functionality from a utility class rather than delegating to it
* Big ball of mud: A system with no recognisable structure
* Blind faith: Lack of checking of (a) the correctness of a bug fix or (b) the result of a subroutine
* Blob: see God object
* Boat anchor: Retaining a part of a system that no longer has any use
* Busy spin: Consuming CPU while waiting for something to happen, usually by repeated checking instead of proper messaging
* Caching failure: Forgetting to reset an error flag when an error has been corrected
* Checking type instead of interface: Checking that an object has a specific type when only a certain contract is required
* Code momentum: Over-constraining part of a system by repeatedly assuming things about it in other parts
* Coding by exception: Adding new code to handle each special case as it is recognised
* Copy and paste programming: Copying (and modifying) existing code without creating generic solutions
* De-Factoring: The process of removing functionality and replacing it with documentation
* DLL hell: Problems with versions, availability and multiplication of DLLs
* Double-checked locking: Checking, before locking, whether locking is necessary, in a way that may fail with, e.g., modern hardware or compilers
* Empty subclass failure: Creating a (Perl) class that fails the "Empty Subclass Test" by behaving differently from a class derived from it
without modifications
* Gas factory: An unnecessarily complex design
* God object: Concentrating too many functions in a single part of the design (class)
* Golden hammer: Assuming that a favorite solution is universally applicable
* Improbability factor: Assuming that it is improbable that a known error becomes effective
* Input kludge: Failing to specify and implement handling of possibly invalid input
* Interface bloat: Making an interface so powerful that it is too hard to implement
* Hard code: Embedding assumptions about the environment of a system at many points in its implementation
* Lava flow: Retaining undesirable (redundant or low-quality) code because removing it is too expensive or has unpredictable
consequences
* Magic numbers: Including unexplained numbers in algorithms
* Magic pushbutton: Implementing the results of user actions in terms of an inappropriate (insufficiently abstract) interface
* Object cesspool: Reusing objects whose state does not conform to the (possibly implicit) contract for re-use
* Premature optimization: Optimization on the basis of insufficient information
* Poltergeists: Objects whose sole purpose is to pass information to another object
* Procedural code (when another paradigm is more appropriate)
* Race hazard: Failing to see the consequence of different orders of events
* Re-Coupling: The process of introducing unnecessary object dependency
* Reinventing the wheel: Failing to adopt an existing solution
* Reinventing the square wheel: Creating a poor solution when a good one exists
* Smoke and mirrors: Demonstrating how unimplemented functions will appear
* Software bloat: Allowing successive versions of a system to demand ever more resources
* Spaghetti code: Systems whose structure is barely comprehensible, especially because of misuse of code structures
* Stovepipe system: A barely maintainable assemblage of ill-related components
* Yo-yo problem: A structure (e.g. of inheritance) that is hard to understand due to excessive fragmentation
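The double-checked locking entry above can be made concrete with a short sketch. The `Config` class here is invented for illustration; the idea is to skip the lock on the fast path by checking first, then check again under the lock. In languages with weaker memory models (C++, pre-Java-5 Java) the unsynchronized first read can observe a partially constructed object, which is the failure mode the list refers to:

```python
import threading

class Config:
    """Lazily created shared object (hypothetical example class)."""
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get(cls):
        # Double-checked locking: test before locking to avoid taking
        # the lock on every call, then test again once the lock is held.
        if cls._instance is None:          # first (unlocked) check
            with cls._lock:
                if cls._instance is None:  # second (locked) check
                    cls._instance = cls()
        return cls._instance

# All threads should end up sharing one instance.
results = []
threads = [threading.Thread(target=lambda: results.append(Config.get()))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert all(r is results[0] for r in results)
```

In CPython this happens to be safe; the anti-pattern label applies to languages where the compiler or hardware may reorder the writes inside the constructor past the assignment to `_instance`.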
Unusual software bugs are more difficult to understand and repair than ordinary software bugs. There are several kinds, mostly named after
scientists who discovered counterintuitive things.
Heisenbugs
A heisenbug is a computer bug that disappears or alters its characteristics when it is investigated.
Common examples are bugs that occur in a release-mode compile of a program but do not occur when run under a debugger, or
some bugs caused by a race condition. The name is a pun on the physics term "Heisenberg uncertainty principle", which is popularly
believed to refer to the way observers affect the observed in quantum mechanics.
In an interview in ACM Queue vol. 2, no. 8 - November 2004 Bruce Lindsay tells of being there when the term was first used and that it was
created because Heisenberg said "the more closely you look at one thing, the less closely can you see something else."
Bohr bugs
A Bohr bug (named after the Bohr atom model) is a bug that, in contrast with heisenbugs, does not disappear or alter its characteristics
when it is investigated.
Mandelbugs
A mandelbug (named after fractal innovator Benoît Mandelbrot) is a computer bug whose causes are so complex that its behavior appears
chaotic. This word also implies that the speaker thinks it is a Bohr bug rather than a heisenbug.
It can be argued, according to the same principle as the Turing test, that if there is no way for a judge to differentiate between a bug whose
behavior appears chaotic and a bug whose behavior actually is chaotic, then the distinction between mandelbug and heisenbug is
irrelevant, since there is no way to tell them apart.
Some use mandelbug to describe a bug whose behavior does not appear chaotic, but whose causes are so complex that there is no
practical solution. An example of this is a bug caused by a flaw in the fundamental design of the entire system.
Schroedinbugs
A Schroedinbug is a bug that manifests itself apparently only after the software is used in an unusual way or seemingly at the point in time
that a programmer reading the source code notices that the program should never have worked in the first place, at which point the
program stops working entirely until the mysteriously now non-functioning code is repaired. FOLDOC, in a statement of apparent jest, adds:
"Though... this sounds impossible, it happens; some programs have harboured latent schroedinbugs for years."
The name schroedinbug is derived from the Schrödinger's cat thought experiment. A well written program executing in a reliable computing
environment is expected to follow the principle of determinism, and as such the quantum questions of observability (i.e. breaking the
program by reading the source code) posited by Schrödinger (i.e. killing the cat by opening the box) cannot actually affect the operation of a
program. However, quickly repairing an obviously defective piece of code is often more important than attempting to determine by what
arcane set of circumstances it accidentally worked in the first place or exactly why it stopped. By declaring that the code could never have
worked in the first place despite evidence to the contrary, the complexity of the computing system is causing the programmer to fall back on
superstition. For example, a database program may have initially worked on a small number of records, including test data used during
development, but broke once the amount of data reached a certain limit, without this cause being at all intuitive. A programmer who does not
know the cause, and who did not consider the normal growth of the database as a factor in the breakage, could label the
defect a schroedinbug.
Appearance in Fiction
In the independent movie 'Schrödinger's Cat', a Schroedinbug is found in the programming of American defense systems, causing a
catastrophic security failure.
Software Quality Assurance
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-
upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.
• Application of Technical Methods (Employing proper methods and tools for developing software)
• Conduct of Formal Technical Review (FTR)
• Testing of Software
• Enforcement of Standards (Customer imposed standards or management imposed standards)
• Control of Change (Assess the need for change, document the change)
• Measurement (Software Metrics to measure the quality, quantifiable)
• Records Keeping and Recording (Documentation, reviewed, change control etc. i.e. benefits of docs).
What is quality?
Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations and is
maintainable. However, quality is a subjective term. Quality depends on who the customer is and their overall influence in the scheme of
things. Customers of a software development project include end-users, customer acceptance test engineers, customer contract
officers, customer management, the development organization's management, test engineers, testers, salespeople, software engineers,
stockholders and accountants. Each type of customer will have his or her own slant on quality. The accounting department might define
quality in terms of profits, while an end-user might define quality as user friendly and bug free.
Software Testing
Software testing is a critical component of the software engineering process. It is an element of software quality assurance and can be
described as a process of running a program in such a manner as to uncover any errors. This process, while seen by some as tedious,
tiresome and unnecessary, plays a vital role in software development.
Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A
of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and
abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or
things don't happen when they should. It is oriented to 'detection'.
Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one
group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA
processes monitored by project managers. It will depend on what best fits an organization's size and business structure.
Quality Assurance
(1)The planned systematic activities necessary to ensure that a component, module, or system conforms to established technical
requirements.
(2) All actions that are taken to ensure that a development organization delivers products that meet performance requirements and adhere
to standards and procedures.
(3) The policy, procedures, and systematic actions established in an enterprise for the purpose of providing and maintaining some degree of
confidence in data integrity and accuracy throughout the life cycle of the data, which includes input, update, manipulation, and output.
(4) (QA) The actions, planned and performed, to provide confidence that all systems and components that influence the quality of the
product are working as expected individually and collectively.
Quality Control
The operational techniques and procedures used to achieve quality requirements.
2. Module testing:
A module is a collection of dependent components such as an object class, an abstract data type or some looser collection of procedures
and functions. A module encapsulates related components so it can be tested without other system modules.
3. Sub-system testing: (Integration Testing) (Design Oriented)
This phase involves testing collections of modules, which have been integrated into sub-systems. Sub-systems may be independently
designed and implemented. The most common problems that arise in large software systems are sub-system interface mismatches.
The sub-system test process should therefore concentrate on the detection of interface errors by rigorously exercising these interfaces.
4. System testing:
The sub-systems are integrated to make up the entire system. The testing process is concerned with finding errors that result from
unanticipated interactions between sub-systems and system components. It is also concerned with validating that the system meets its
functional and non-functional requirements.
5. Acceptance testing:
This is the final stage in the testing process before the system is accepted for operational use. The system is tested with data supplied by
the system client rather than simulated test data. Acceptance testing may reveal errors and omissions in the system requirements
definition (user-oriented) because real data exercises the system in different ways from the test data. Acceptance testing may also reveal
requirement problems where the system facilities do not really meet the users' needs (functional) or the system performance (non-
functional) is unacceptable.
Acceptance testing is sometimes called alpha testing. Bespoke systems are developed for a single client. The alpha testing process
continues until the system developer and the client agree that the delivered system is an acceptable implementation of the system
requirements.
When a system is to be marketed as a software product, a testing process called beta testing is often used.
Beta testing involves delivering a system to a number of potential customers who agree to use that system. They report problems to the
system developers. This exposes the product to real use and detects errors that may not have been anticipated by the system builders.
After this feedback, the system is modified and either released for further beta testing or for general sale.
Software Testing Strategies
A strategy for software testing integrates software test case design techniques into a well - planned series of steps that result in the
successful construction of software.
Testing Strategies
A strategy is a general approach rather than a method of devising particular system or component tests.
Different strategies may be adopted depending on the type of system to be tested and the development process used. The testing
strategies are
Top-Down Testing
Bottom - Up Testing
Thread Testing
Stress Testing
Back- to Back Testing
1. Top-down testing
Where testing starts with the most abstract component and works downwards.
2. Bottom-up testing
Where testing starts with the fundamental components and works upwards.
3. Thread testing
Which is used for systems with multiple processes where the processing of a transaction threads its way through these processes.
4. Stress testing
Which relies on stressing the system by going beyond its specified limits and hence testing how well the system can cope with overload
situations.
5. Back-to-back testing
Which is used when more than one version of a system is available. The versions are tested together and their outputs are compared.
6. Performance testing
This is used to test the run-time performance of software.
7. Security testing.
This attempts to verify that protection mechanisms built into a system will protect it from improper penetration.
8. Recovery testing.
This forces software to fail in a variety of ways and verifies that recovery is properly performed.
Large systems are usually tested using a mixture of these strategies rather than any single approach. Different strategies may be needed
for different parts of the system and at different stages in the testing process.
Whatever testing strategy is adopted, it is always sensible to adopt an incremental approach to sub-system and system testing. Rather than
integrate all components into a system and then start testing, the system should be tested incrementally. Each increment should be tested
before the next increment is added to the system. This process should continue until all modules have been incorporated into the system.
When a module is introduced at some stage in this process, tests which were previously successful may now detect defects. These
defects are probably due to interactions with the new module. The source of the problem is localized to some extent, thus simplifying defect
location and repair.
Debugging
Common debugging approaches include brute force, backtracking, and cause elimination.
Unit Testing (Coding phase): Focuses on each module and whether it works properly. Makes heavy use of white box testing.
Integration Testing (Design phase): Centered on making sure that each module works with another module. Comprised of two kinds:
top-down and bottom-up integration. Alternatively, focuses on the design and construction of the software architecture. Makes heavy use
of black box testing. (Either answer is acceptable.)
Validation Testing (Analysis phase): Ensuring conformity with requirements.
Systems Testing (Systems Engineering phase): Making sure that the software product works with the external environment, e.g.,
computer system, other software products.
Drivers and Stubs
What is validation?
Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual
testing and takes place after verifications are completed.
What is a walk-through?
A walk-through (in software QA) is an informal meeting for evaluation or informational purposes. A walk-through is also a process at an
abstract level. It's the process of inspecting software code by following paths through the code (as determined by input conditions and
choices made along the way).
The purpose of code walk-throughs (in software development) is to ensure the code fits the purpose. Walk-throughs also offer opportunities
to assess an individual's or team's competency.
A walk-through is also a static analysis technique in which a programmer leads participants through a segment of documentation or code,
and the participants ask questions, and make comments about possible errors, violations of development standards, and other issues.
What is verification?
Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate
documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walk-throughs and inspection
meetings.
Verification (Process Oriented)
Verification involves checking to see whether the program conforms to specification, i.e. whether the right tools and methods have been
employed. Thus, it focuses on process correctness.
What is V&V?
"V&V" is an acronym that stands for verification and validation.
"Validation: are we building the right product?"
"Verification: are we building the product right?"
Verification and validation (V&V) is a process that helps to determine whether the software requirements are complete and correct; whether
the software of each development phase fulfills the requirements and conditions imposed by the previous phase; and whether the final
software complies with the applicable software requirements.
Software verification
In general the demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of the
development life cycle.
Life Cycle
The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life
cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase,
operation and maintenance phase, and a retirement phase.
What is SDLC?
SDLC is an acronym. It stands for "software development life cycle".
Audit
(1)An independent examination of a work product or set of work products to assess compliance with specifications, standards, contractual
agreements, or other criteria.
(2)To conduct an independent review and examination of system records and activities in order to test the adequacy and effectiveness of
data security and data integrity procedures, to ensure compliance with established policy and operational procedures, and to recommend
any necessary changes.
Boundary value
(1)A data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component.
(2)A value which lies at, or just inside or just outside a specified range of valid input and output values.
Branch coverage
A test coverage criterion which requires that for each decision point each possible branch be executed at least once. Syn: decision coverage.
Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.
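As a sketch (the `classify` function is invented for illustration), full branch coverage of a single decision point needs one test per outcome:

```python
def classify(amount):
    # One decision point with two branches.
    if amount < 0:
        return "invalid"
    else:
        return "ok"

# Two test cases, one per branch, achieve 100% branch (decision)
# coverage; a single test would execute only one of the two outcomes.
assert classify(-1) == "invalid"   # exercises the true branch
assert classify(5) == "ok"         # exercises the false branch
```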
Equivalence Partitioning
Input data of a program is divided into different categories so that test cases can be developed for each category of input data. The goal of
equivalence partitioning is to come out with test cases so that errors are uncovered and test cases can be carried out more efficiently. The
different categories of input data are called Equivalence Classes.
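For instance (a hypothetical age field accepting 18 to 65), the input domain splits into one valid and two invalid equivalence classes, and one representative value per class suffices:

```python
def accepts_age(age):
    # Valid partition: 18..65; invalid partitions: below 18, above 65.
    return 18 <= age <= 65

# One representative test case per equivalence class.
assert accepts_age(30) is True    # class 1: within the valid range
assert accepts_age(10) is False   # class 2: below the range
assert accepts_age(70) is False   # class 3: above the range
```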
Manual Testing
That part of software testing that requires operator input, analysis, or evaluation.
or
A manual test is a test for which there is no automation. Instead, test steps are outlined in a document for the tester to complete. The tester
can then report test results and submit bugs as appropriate.
Mean
A value derived by adding several quantities and dividing the sum by the number of these quantities.
Measurement
The act or process of measuring. A figure, extent, or amount obtained by measuring.
Bottom-Up Strategy
Bottom-up approach, as the name suggests, is the opposite of the Top-down method.
This process starts with building and testing the low level modules first, working its way up the hierarchy.
Because the modules at the low levels are very specific, we may need to combine several of them into what is sometimes called a cluster or
build in order to test them properly.
Then to test these builds, a test driver has to be written and put in place.
The advantage of Bottom-up integration is that there is no need for program stubs as we start developing and testing with the actual
modules.
Starting at the bottom of the hierarchy also means that the critical modules are usually built first, and therefore any errors in these modules
are discovered early in the process.
As with Top-down integration, there are some drawbacks to this procedure.
In order to test the modules we have to build the test drivers which are more complex than stubs. And in addition to that they themselves
have to be tested. So more effort is required.
A major disadvantage to Bottom-up integration is that no working model can be presented or tested until many modules have been built.
Big-Bang Strategy
The Big-Bang approach is very simple in its philosophy: all the modules or builds are constructed and tested independently of
each other, and when they are finished, they are all put together at the same time.
The main advantage of this approach is that it is very quick as no drivers or stubs are needed, thus cutting down on the development time.
However, as with anything that is quickly slapped together, this process usually yields more errors than the other two. Since these errors
have to be fixed and take more time to fix than errors at the module level, this method is usually considered the least effective.
Because of the amount of coordination that is required it is also very demanding on the resources. Another drawback is that there is really
nothing to demonstrate until all the modules have been built and integrated.
Inspection
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes.
The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and
see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will
be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is
difficult, painstaking work, but is one of the most cost effective methods of ensuring quality. Employees who are most skilled at inspections
are like the 'eldest brother' in the parable in 'Why is it often hard for management to get serious about quality assurance?'. Their skill may
have low visibility but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective
than bug detection.
or
1) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the
author to detect faults, violations of development standards, and other problems. 2) A quality improvement process for written material that
consists of two dominant components: product (document) improvement and process improvement (document production and inspection).
Instrument: To install or insert devices or instructions into hardware or software to monitor the operation of a system or component.
What is an inspection?
An inspection is a formal meeting, more formalized than a walk-through and typically consists of 3-10 people including a moderator, reader
(the author of whatever is being reviewed) and a recorder (to make notes in the document). The subject of the inspection is typically a
document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not
to fix anything. The result of the meeting should be documented in a written report. Attendees should prepare for this type of meeting by
reading through the document, before the meeting starts; most problems are found during this preparation. Preparation for inspections is
difficult, but is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost effective than bug detection.
Code inspection
A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask
questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and
analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be
applied to other software and configuration items.
Code review
A meeting at which software code is presented to project personnel, managers, users, customers, or other interested parties for comment
or approval. Contrast with code audit, code inspection, code walkthrough.
Code walkthrough
A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a
small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Contrast with code audit, code inspection, code review
Coverage analysis
Determining and assessing measures associated with the invocation of program structural elements to determine the adequacy of a test
run. Coverage analysis is useful when attempting to execute each statement, branch, path, or iterative structure in a program. Tools that
capture this data and provide reports summarizing relevant information have this feature.
Crash
The sudden and complete failure of a computer system or component.
Criticality
The degree of impact that a requirement, module, error, fault, failure, or other item has on the development or operation of a system. Syn:
severity.
Cyclomatic complexity
(1)The number of independent paths through a program.
(2) The cyclomatic complexity of a program is equivalent to the number of decision statements plus 1.
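Using definition (2), a hypothetical function with two decision statements has cyclomatic complexity 2 + 1 = 3, which matches the number of independent paths through it:

```python
def shipping_cost(weight, express):
    # Decision statement 1
    if weight > 10:
        cost = 20
    else:
        cost = 5
    # Decision statement 2
    if express:
        cost += 15
    return cost

# 2 decision statements + 1 = cyclomatic complexity of 3, so a basis
# set of three independent paths covers every branch at least once.
assert shipping_cost(12, False) == 20   # path: heavy, not express
assert shipping_cost(5, False) == 5     # path: light, not express
assert shipping_cost(5, True) == 20     # path: light, express
```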
Error
A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or
condition.
Error guessing
Test data selection technique. The selection criterion is to pick values that seem likely to cause errors.
Error seeding
(IEEE) The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring
the rate of detection and removal, and estimating the number of faults remaining in the program. Contrast with mutation analysis.
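The seeding ratio gives a rough estimate of remaining faults. A minimal sketch, under the usual assumption that seeded and indigenous faults are equally likely to be detected (the numbers are invented):

```python
def estimate_total_faults(seeded, seeded_found, real_found):
    """Capture-recapture style estimate: if testers found the same
    fraction of real faults as of seeded faults, then
    total_real ~= real_found * seeded / seeded_found."""
    return real_found * seeded / seeded_found

# 20 faults seeded, 16 of them rediscovered (80%), and 40 real
# faults found during the same testing effort:
total = estimate_total_faults(seeded=20, seeded_found=16, real_found=40)
assert total == 50.0          # estimated real faults in the program
assert total - 40 == 10.0     # estimated faults still remaining
```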
Exception
An event that causes suspension of normal program execution. Types include addressing exception, data exception, operation exception,
overflow exception, protection exception, and underflow exception.
Failure
The inability of a system or component to perform its required functions within specified performance requirements.
Fault
An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated
manner.
Review
A process or meeting during which a work product or set of work products, is presented to project personnel, managers, users, customers,
or other interested parties for comment or approval. Types include code review, design review, formal qualification review, requirements
review, test readiness review. Contrast with audit, inspection.
Risk
A measure of the probability and severity of undesired effects. Often taken as the simple product of probability and consequence.
Risk Assessment
A comprehensive evaluation of the risk and its associated impact.
Software Review
An evaluation of software elements to ascertain discrepancies from planned results and to recommend improvement. This evaluation
follows a formal process.
Static analysis
(1) Analysis of a program that is performed without executing the program.
(2) The process of evaluating a system or component based on its form, structure, content, and documentation. Contrast with dynamic analysis.
Test
An activity in which a system or component is executed under specified conditions, the results are observed or recorded and an evaluation
is made of some aspect of the system or component.
Testability
(1) The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine
whether those criteria have been met.
(2) The degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine
whether those criteria have been met.
Before creating test cases to "break the system", a few principles have to be observed:
Testing should be based on user requirements. This is in order to uncover any defects that might cause the program or system to fail to
meet the client's requirements.
Testing time and resources are limited. Avoid redundant tests.
It is impossible to test everything. Exhaustive tests of all possible scenarios are impossible, simply because of the many different variables
affecting the system and the number of paths a program flow might take.
Use effective resources to test. This represents use of the most suitable tools, procedures and individuals to conduct the tests. The test
team should use tools that they are confident and familiar with. Testing procedures should be clearly defined. Testing personnel may be a
technical group of people independent of the developers.
Test planning should be done early. This is because test planning can begin independently of coding and as soon as the client
requirements are set.
Testing should begin at the module level. The focus of testing should be concentrated on the smallest programming units first and then expand to
other parts of the system.
We look at software testing in the traditional (procedural) sense and then describe some testing strategies and methods used in Object
Oriented environment. We also introduce some issues with software testing in both environments.
Test case
Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.
A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is
working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input
data requirements, steps, and expected results.
Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires
completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle
if possible.
or
The definition of test case differs from company to company, engineer to engineer, and even project to project. A test case usually includes
an identified set of information about observable states, conditions, events, and data, including inputs and expected outputs.
Scenario-based Testing
This form of testing concentrates on what the user does. It basically involves capturing the user actions and then simulating them and
similar actions during the test. These tests tend to find interaction-type errors.
Test design
Documentation specifying the details of the test approach for a software feature or combination of software features and identifying the
associated tests.
Test documentation
Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident
report, test log, test plan, test procedure, test report.
Test driver
A software module used to invoke a module under test and, often, provide test inputs, control and monitor execution, and report test results.
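A test driver can be a short program that invokes the module under test, supplies inputs, and reports results, as the definition says. A minimal sketch (the `discount` function stands in for a real module under test):

```python
# Stand-in for the real module under test (hypothetical).
def discount(price, percent):
    return round(price * (1 - percent / 100), 2)

def driver():
    """Test driver: invokes the unit under test, provides test
    inputs, monitors execution, and reports pass/fail per case."""
    cases = [((100, 10), 90.0), ((80, 25), 60.0), ((50, 0), 50.0)]
    results = []
    for args, expected in cases:
        actual = discount(*args)
        results.append((args, expected, actual, actual == expected))
    return results

report = driver()
assert all(passed for *_, passed in report)
```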
Test item
A software item which is the object of testing.
Test log
A chronological record of all relevant details about the execution of a test.
Test phase
The period of time in the software life cycle in which the components of a software product are evaluated and integrated, and the software
product is evaluated to determine whether or not requirements have been satisfied.
Test plan
Documentation specifying the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to
be tested, the testing tasks, responsibilities, required resources, and any risks requiring contingency planning.
or
A formal or informal plan to be followed to assure the controlled testing of the product under test.
Test procedure
A formal document developed from a test plan that presents detailed instructions for the setup, operation, and evaluation of the results for
each defined test.
Technical Review
A review that refers to content of the technical material being reviewed.
Test Development
The development of anything required to conduct testing. This may include test requirements (objectives), strategies, processes, plans,
software, procedures, cases, documentation, etc.
Test Executive
Another term for test harness.
Test Harness
A software tool that enables the testing of software components that links test capabilities to perform specific tests, accept program inputs,
simulate missing components, compare actual outputs with expected outputs to determine correctness, and report discrepancies.
Test Objective
An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior
described in the software documentation.
Test Procedure
The formal or informal procedure that will be followed to execute a test. This is usually a written document that allows others to execute the
test with a minimum of training.
Testing
Any activity aimed at evaluating an attribute or capability of a program or system to determine that it meets its required results. The process
of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements
or to identify differences between expected and actual results.
Top-down Testing
An integration testing technique that tests the high-level components first, using stubs for lower-level called components that have not yet
been integrated and that simulate the required actions of those components.
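A minimal sketch of the idea in Python; `report_total` and the tax-rate stub are illustrative names, not part of any real system.

```python
# Top-down integration sketch: the high-level `report_total` is tested
# first, while the not-yet-integrated tax-rate lookup is replaced by a
# stub that simulates its required action with a canned response.

def tax_rate_stub(region):
    """Stub standing in for the real, lower-level tax-rate component."""
    return 0.10  # fixed response, good enough to drive the caller

def report_total(amount, region, tax_lookup):
    rate = tax_lookup(region)
    return round(amount * (1 + rate), 2)

print(report_total(100.0, "EU", tax_rate_stub))  # 110.0
```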
Test report
A document describing the conduct and results of the testing carried out for a system or system component.
Testing
(1)The process of operating a system or component under specified conditions, observing or recording the results, and making an
evaluation of some aspect of the system or component.
(2) The process of analyzing a software item to detect the differences between existing and required conditions, i.e. bugs, and to evaluate
the features of the software items.
Acceptance Testing
Testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether
or not to accept the system. Contrast with testing, development; testing, operational.
or
Formal testing conducted to determine whether or not a system satisfies its acceptance criteria—enables an end user to determine whether
or not to accept the system.
Boundary Value Analysis
A method of testing that complements equivalence partitioning. Here both data input and data output are tested. The rationale behind
BVA is that errors typically occur at the boundaries of the data. The boundaries refer to the upper limit and the lower limit of a range of
values, more commonly known as the "edges" of the boundary.
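The edge-focused case selection can be sketched as follows, assuming a hypothetical field that accepts values from 1 to 100:

```python
# Boundary-value sketch for an assumed valid range of 1..100: test at
# and just beyond each edge, where errors typically cluster.

def in_range(n, low=1, high=100):
    return low <= n <= high

boundary_cases = {
    0: False,    # just below the lower edge
    1: True,     # lower edge
    2: True,     # just above the lower edge
    99: True,    # just below the upper edge
    100: True,   # upper edge
    101: False,  # just beyond the upper edge
}
for value, expected in boundary_cases.items():
    assert in_range(value) is expected
print("all boundary cases pass")
```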
Branch Testing
Testing technique to satisfy coverage criteria which require that for each decision point, each possible branch [outcome] be executed at
least once. Contrast with testing, path; testing, statement.
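A minimal Python sketch: the function below has one decision point, and the two test cases force each branch outcome at least once.

```python
# Branch-testing sketch: one decision point gives two branches; the
# test set below executes each possible outcome at least once.

def classify(n):
    if n < 0:        # decision point with two branches
        return "negative"
    return "non-negative"

assert classify(-5) == "negative"      # true branch
assert classify(3) == "non-negative"   # false branch
print("both branches executed")
```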
Alpha Testing
Acceptance testing performed by the customer in a controlled environment at the developer's site. The software is used by the customer in
a setting approximating the target environment with the developer observing and recording errors and usage problems.
or
Testing of a software product or system conducted at the developer’s site by the end user.
Assertion Testing
A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth
of the assertions is determined as the program executes.
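A small Python sketch of the technique; the `average` function is illustrative. The embedded assertions state relationships between program variables and are checked as the program executes.

```python
# Assertion-testing sketch: assertions about relationships between
# program variables are inserted into the code; their truth is
# determined as the program executes.

def average(values):
    assert len(values) > 0, "precondition: non-empty input"
    total = sum(values)
    result = total / len(values)
    # postcondition: the mean must lie between the extremes
    assert min(values) <= result <= max(values)
    return result

print(average([2, 4, 6]))  # 4.0
```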
Beta Testing
(1)Acceptance testing performed by the customer in a live application of the software, at one or more end user sites, in an environment not
controlled by the developer.
(2) For medical device software such use may require an Investigational Device Exemption [IDE] or Institutional Review Board [IRB]
approval.
or
Testing conducted at one or more end user sites by the end user of a delivered software product or system.
Integration
The process of combining software components or hardware components, or both, into an overall system.
Integration Testing
An orderly progression of testing in which software elements, hardware elements, or both are combined and tested, to evaluate their
interactions, until the entire system has been integrated.
OO Integration Testing
This strategy involves testing the classes as they are integrated into the system. The traditional approach would test each operation
separately as it is implemented into a class. In an OO system this approach is not viable because of the "direct and indirect
interactions of the components that make up the class". Integration testing in OO can be performed in two basic ways:
- Thread-based - Takes all the classes needed to react to a given input. Each class is unit tested and then thread constructed from these
classes tested as a set.
- Uses-based - Tests classes in groups. Once the group is tested, the next group that uses the first group (dependent classes) is tested.
Then the group that uses the second group and so on. Use of stubs or drivers may be necessary. Cluster testing is similar to testing builds
in the traditional model. Basically collaborating classes are tested in clusters.
Functional Testing
(1) Testing that ignores the internal mechanism or structure of a system or component and focuses on the outputs generated in response to
selected inputs and execution conditions.
(2) Testing conducted to evaluate the compliance of a system or component with specified functional requirements and corresponding
predicted results. (3) Application of test data derived from the specified functional requirements without regard to the final program
structure. Also known as black-box testing.
Interface Testing
Testing conducted to evaluate whether systems or components pass data and control correctly to one another. Contrast with testing, unit;
testing, system.
Mutation Testing
A testing methodology in which two or more program mutations are executed using the same test cases to evaluate the ability of the test
cases to detect differences in the mutations.
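The idea can be sketched in a few lines of Python; `original` and its single-operator `mutant` are hypothetical:

```python
# Mutation-testing sketch: the original program and a slight variant
# (mutant) run against the same test cases; a good test set "kills"
# the mutant by producing at least one differing result.

def original(a, b):
    return a + b

def mutant(a, b):   # one operator mutated: + became -
    return a - b

test_cases = [(2, 3), (0, 0), (5, 5)]
killed = any(original(a, b) != mutant(a, b) for a, b in test_cases)
print("mutant killed:", killed)  # True: the test set detects the change
```

Note that a test set containing only `(0, 0)` would fail to kill this mutant, which is exactly the thoroughness gap mutation testing exposes.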
Operational Testing
Testing conducted to evaluate a system or component in its operational environment. Contrast with testing, development; testing,
acceptance.
Parallel Testing
Testing a new or an altered data processing system with the same source data that is used in another system. The other system is
considered as the standard of comparison.
Audit
An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved.
Audit is a staff function; it serves as the “eyes and ears” of management.
Performance Testing
Functional testing conducted to evaluate the compliance of a system or component with specified performance requirements.
Qualification Testing
Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements.
Statement Testing
Testing to satisfy the criterion that each statement in a program be executed at least once during program testing.
Storage Testing
This is a determination of whether or not certain processing conditions use more storage [memory] than estimated.
Regression Testing
Rerunning test cases which a program has previously executed correctly in order to detect errors spawned by changes or corrections made
during software development and maintenance.
Stress Testing
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
System
A collection of people, machines, and methods organized to accomplish a set of specified functions.
System Simulation
Another name for prototyping.
System Testing
The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. Such testing
may be conducted in both the development environment and the target environment.
or
The process of testing an integrated hardware and software system to verify that the system meets its specified requirements.
System Testing
The final stage of the testing process should be system testing. This type of test involves examination of the whole computer system: all the
software components, all the hardware components, and any interfaces.
The whole computer-based system is checked not only for validity but also for met objectives.
It should include recovery testing, security testing, stress testing and performance testing.
Recovery testing uses test cases designed to examine how easily and completely the system can recover from a disaster (power shut
down, blown circuit, disk crash, interface failure, insufficient memory, etc.). It is desirable to have a system capable of recovering quickly
and with minimal human intervention. It should also have a log of activities happening before the crash (these should be part of daily
operations) and a log of messages during the failure (if possible) and upon re-start.
Security testing involves testing the system in order to make sure that unauthorized personnel or other systems cannot gain access to the
system and information or resources within it. Programs that check for access to the system via passwords are tested along with any
organizational security procedures established.
Stress testing encompasses creating unusual loads on the system in an attempt to break it. The system is monitored for performance loss
and susceptibility to crashing during the load times. If it does crash as a result of high load, that provides for just one more recovery test.
Performance testing involves monitoring and recording the performance levels during regular, low-stress, and high-stress loads. It tests the
amount of resource usage under the conditions just described and serves as a basis for forecasting the additional resources needed (if
any) in the future. It is important to note that performance objectives should have been developed during the planning stage and
performance testing is to assure that these objectives are being met. However, these tests may be run in initial stages of production to
compare the actual usage to the forecasted figures.
OO Unit Testing
In the OO paradigm it is no longer possible to test individual operations as units. Instead they are tested as part of the class, and the class or
an instance of a class (an object) then represents the smallest testable unit or module. Because of inheritance, testing individual operations
separately (independently of the class) would not be very effective, as they interact with each other by modifying the state of the object they
are applied to.
What is usability?
"Usability" means ease of use; the ease with which a user can learn to operate, prepare inputs for, and interpret the outputs of a software
product.
Usability Testing
Tests designed to evaluate the machine/user interface. Are the communication device(s) designed in a manner such that the information is
displayed in an understandable fashion, enabling the operator to interact with the system correctly?
Volume Testing
Testing designed to challenge a system's ability to manage the maximum amount of data over a period of time. This type of testing also
evaluates a system's ability to handle overload situations in an orderly fashion.
Black Box Testing
Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box tests are based on
requirements and functionality.
What is a QA engineer?
We, QA engineers, are test engineers but we do more than just testing. Good QA engineers understand the entire software development
process and how it fits into the business approach and the goals of the organization. Communication skills and the ability to understand
various sides of issues are important. We, QA engineers, are successful if people listen to us, if people use our tests, if people think that
we're useful, and if we're happy doing our work. I would love to see QA departments staffed with experienced software developers who
coach development teams to write better code. But I've never seen it. Instead of coaching, we, QA engineers, tend to be process people.
ISO
ISO = 'International Organisation for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994)
concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing
organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other
processes. The full set of standards consists of: (a)Q9001-2000 - Quality Management Systems: Requirements; (b)Q9000-2000 - Quality
Management Systems: Fundamentals and Vocabulary; (c)Q9004-2000 - Quality Management Systems: Guidelines for Performance
Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years,
after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only
that documented processes are followed. Also see http://www.iso.ch/ for the latest information. In the U.S. the standards can be purchased
via the ASQ web site at http://e-standards.asq.org/
IEEE
IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test
Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for
Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
ANSI
ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related
standards in conjunction with the IEEE and ASQ (American Society for Quality). Other software development/IT management process
assessment methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.
SEI
SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software
development processes.
WinRunner: Should I sign up for a course at a nearby educational institution?
When you're employed, the cheapest or free education is sometimes provided on the job, by your employer, while you are getting paid to do
a job that requires the use of WinRunner and many other software testing tools.
If you're employed but have little or no time, you could still attend classes at nearby educational institutions.
If you're not employed at the moment, then you've got more time than everyone else, so that's when you definitely want to sign up for
courses at nearby educational institutions. Classroom education, especially non-degree courses in local community colleges, tends to be
cheap.
What is up time?
"Up time" is the time period when a system is operational and in service. Up time is the sum of busy time and idle time. For example, if, out
of 168 hours, a system has been busy for 50 hours, idle for 110 hours, and down for 8 hours, then the busy time is 50 hours, idle time is
110 hours, and up time is (110 + 50 =) 160 hours.
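The arithmetic in the example above can be checked directly:

```python
# Up-time arithmetic from the example: a 168-hour window split into
# busy, idle, and down time; up time is the sum of busy and idle.

busy, idle, down = 50, 110, 8
up_time = busy + idle
assert busy + idle + down == 168  # the whole window is accounted for
print(up_time)  # 160
```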
Interface A shared boundary. An interface might be a hardware component to link two devices, or it might be a portion of storage or
registers accessed by two or more computer programs.
Interface Analysis Checks the interfaces between program elements for consistency and adherence to predefined rules or axioms.
What is a utility?
"Utility" is a software tool designed to perform some frequently used support function. For example, one utility is a program to print files.
What is utilization?
"Utilization" is the ratio of time a system is busy (i.e. working for us), divided by the time it is available. For example, if a system was
available for 160 hours and busy for 40 hours, then utilization was (40/160 =) 25 per cent. Utilization is a useful measure in evaluating
computer performance.
What is a variable?
"Variables" are data items in a program whose values can change. There are local and global variables. One example is a variable we have
named "capacitor_voltage_10000", where "capacitor_voltage_10000" can be any whole number between -10000 and +10000.
What is a variant?
"Variants" are versions of a program. Variants result from the application of software diversity.
What is VDD?
"VDD" is an acronym that stands for "version description document".
What is a waiver?
In software QA, a waiver is an authorization to accept software that has been submitted for inspection, found to depart from specified
requirements, but is nevertheless considered suitable for use "as is", or after rework by an approved method.
What is a waterfall model?
Waterfall is a model of the software development process in which the concept phase, requirements phase, design phase, implementation
phase, test phase, installation phase, and checkout phase are performed in that order, possibly with overlap, but with little or no iteration.
How can I be effective and efficient, when I'm testing e-commerce web sites?
When you're doing black box testing of an e-commerce web site, you're most efficient and effective when you're testing the site's visual
appeal, content, and home page. When you want to be effective and efficient, you need to verify that the site is well planned; verify that the
site is customer-friendly; verify that the choices of colors are attractive; verify that the choices of fonts are attractive; verify that the site's
audio is customer friendly; verify that the site's video is attractive; verify that the choice of graphics is attractive; verify that every page of the
site is displayed properly on all the popular browsers; verify the authenticity of facts; ensure the site provides reliable and consistent
information; test the site for appearance; test the site for grammatical and spelling errors; test the site for visual appeal, choice of browsers,
consistency of font size, download time, broken links, missing links, incorrect links, and browser compatibility; test each toolbar, each menu
item, every window, every field prompt, every pop-up text, and every error message; test every page of the site for left and right
justifications, every shortcut key, each control, each push button, every radio button, and each item on every drop-down menu; test each list
box, and each help menu item. Also check whether the command buttons are grayed out when they are not in use.
What is a backward compatible design?
A design is backward compatible if it continues to work with earlier versions of a language, program, code, or software. When a design is
backward compatible, changes to its signals or data do not break the existing code.
For instance, a (mythical) web designer decides he should make some changes, because the fun of using Javascript and Flash is more
important (to his customers) than his backward compatible design. Or, alternatively, he decides, he has to make some changes because he
doesn't have the resources to maintain multiple styles of backward compatible web design. Therefore, our mythical web designer's decision
will inconvenience some users, because some of the earlier versions of Internet Explorer and Netscape will not display his web pages
properly (as there are some serious improvements in the newer versions of Internet Explorer and Netscape that make the older versions of
these browsers incompatible with, for example, DHTML). This is when we say, "Our (mythical) web designer's code fails to work with earlier
versions of browser software, therefore his design is not backward compatible".
On the other hand, if the same mythical web designer decides that backward compatibility is more important than fun, or, if he decides that
he does have the resources to maintain multiple styles of backward compatible code, then, obviously, no user will be inconvenienced when
Microsoft or Netscape make some serious improvements in their web browsers. This is when we can say, "Our mythical web designer's
design is backward compatible".
What is a parameter?
In software QA or software testing, a parameter is an item of information - such as a name, number, or selected option - that is passed to a
program, by a user or another program. By definition, in software, a parameter is a value on which something else depends. Any desired
numerical value may be given as a parameter. In software development, we use parameters when we want to allow a specified range of
variables. We use parameters when we want to differentiate behavior or pass input data to computer programs or their subprograms. Thus,
when we are testing, the parameters of the test can be varied to produce different results, because parameters do affect the operation of
the program receiving them.
Example 1: We use a parameter, such as temperature, that defines a system. In this definition, it is temperature that defines the system and
determines its behavior.
Example 2: In the definition of function f(x) = x + 10, x is a parameter. In this definition, x defines the f(x) function and determines its
behavior. Thus, when we are testing, x can be varied to make f(x) produce different values, because the value of x does affect the value of
f(x).
When parameters are passed to a function subroutine, they are called arguments.
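Example 2 above, expressed as runnable Python:

```python
# The parameter example from the text: x is the parameter of f, and
# varying it makes f(x) produce different values.

def f(x):
    return x + 10

print(f(0), f(5), f(-10))  # 10 15 0
```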
What is a constant?
In software or software testing, a constant is a meaningful name that represents a number, or string, that does not change. Constants are
variables that remain the same, i.e. constant, throughout the execution of a program.
Why do we, developers, use constants? Because if we have code that contains constant values that keep reappearing, or, if we have code
that depends on certain numbers that are difficult to remember, we can improve both the readability and maintainability of our code, by
using constants.
To give you an example, we declare a constant and we call it "Pi". We set it to 3.14159265, and use it throughout our code. Constants, such
as Pi, as the name implies, store values that remain constant throughout the execution of our program.
Keep in mind that, unlike variables which can be read from and written to, constants are read-only. Although constants resemble variables,
we cannot modify or assign new values to them, as we can to variables, but we can make constants public or private. We can also specify
what data type they are.
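The Pi example above as Python. Python has no constant keyword, so the ALL_CAPS name is the conventional (unenforced) read-only signal:

```python
# Constant sketch: a meaningful name for a value that never changes,
# declared once and used throughout the code for readability and
# maintainability.

PI = 3.14159265  # conventional ALL_CAPS name marks it as a constant

def circle_area(radius):
    return PI * radius * radius

print(circle_area(1))  # 3.14159265
```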
What testing approaches can you tell me about?
Each of the followings represents a different testing approach: black box testing, white box testing, unit testing, incremental testing,
integration testing, functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing,
performance testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility testing, exploratory testing, ad-
hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing, and mutation testing.
To learn to use WinRunner, should I sign up for a course at a nearby educational institution?
Free, or inexpensive, education is often provided on the job, by an employer, while one is getting paid to do a job that requires the use of
WinRunner and many other software testing tools.
In lieu of a job, it is often a good idea to sign up for courses at nearby educational institutes. Classes, especially non-degree courses in
community colleges, tend to be inexpensive.
Black-box Testing
-Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing.
-Black box testing indicates whether or not a program meets required specifications by spotting faults of omission -- places where the
specification is not fulfilled.
-Black-box testing relies on the specification of the system or the component that is being tested to derive test cases. The system is a black
box whose behavior can only be determined by studying its inputs and the related outputs.
Affinity Diagram
A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.
Brainstorming
A group process for generating creative and diverse ideas.
Cause-effect Graphing
A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to
produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.
Checksheet
A form used to record data as it is gathered.
Clear-box Testing
Another term for white-box testing. Structural testing is sometimes referred to as clear-box testing, since “white boxes” are considered
opaque and do not really permit visibility into the code. This is also known as glass-box or open-box testing.
Client
The end user that pays for the product received, and receives the benefit from the use of the product.
Control Chart
A statistical method for distinguishing between common and special cause variation exhibited by processes.
Unit Testing
The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and
tested) satisfies its functional specification or its implemented structure matches the intended design structure.
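A minimal unit-test sketch using Python's standard `unittest` module; the `absolute` function stands in for the unit under test:

```python
# Unit-testing sketch: the smallest testable unit here is a single
# function, checked against its functional specification.

import unittest

def absolute(n):   # stand-in for the unit under test
    return -n if n < 0 else n

class TestAbsolute(unittest.TestCase):
    def test_negative(self):
        self.assertEqual(absolute(-4), 4)

    def test_non_negative(self):
        self.assertEqual(absolute(7), 7)
        self.assertEqual(absolute(0), 0)

if __name__ == "__main__":
    unittest.main(argv=["prog"], exit=False)
```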
User
The end user that actually uses the product received.
V- Diagram (model)
A diagram that visualizes the order of testing activities and their corresponding phases of development.
Validation
The process of evaluating software to determine compliance with specified requirements.
Verification
The process of evaluating the products of a given software development activity to determine correctness and consistency with respect to
the products and standards provided as input to that activity.
Walkthrough
Usually, a step-by-step simulation of the execution of a procedure, as when walking through code, line by line, with an imagined set of
inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals,
specifications, etc.
White-box Testing
1. Testing approaches that examine the program structure and derive test data from the program logic. This is also known as clear box
testing, glass-box or open-box testing. White box testing determines if program-code structure and logic is faulty. The test is accurate only if
the tester knows what the program is supposed to do. He or she can then see if the program diverges from its intended goal. White box
testing does not account for errors caused by omission, and all visible code must also be readable.
2. White box method relies on intimate knowledge of the code and a procedural design to derive the test cases. It is most widely utilized in
unit testing to determine all possible paths within a module, to execute all loops and to test all logical expressions.
Using white-box testing, the software engineer can (1) guarantee that all independent paths within a module have been exercised at least
once; (2) examine all logical decisions on their true and false sides; (3) execute all loops and test their operation at their limits; and (4)
exercise internal data structures to assure their validity (Pressman, 1997). This form of testing concentrates on the procedural detail.
However, there is no automated tool or testing system for this testing method. Therefore, even for relatively small systems, exhaustive
white-box testing is impossible because of all the possible path permutations.
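Point (1), exercising independent paths, can be sketched as follows; the `bonus` function is hypothetical, with two decisions giving four paths:

```python
# Path-testing sketch: two independent decisions yield four paths
# through the function; the case table exercises every path once.

def bonus(sales, years):
    rate = 0.01
    if sales > 1000:   # decision 1
        rate += 0.02
    if years > 5:      # decision 2
        rate += 0.01
    return rate

paths = [
    (500, 2, 0.01),    # false, false
    (2000, 2, 0.03),   # true,  false
    (500, 10, 0.02),   # false, true
    (2000, 10, 0.04),  # true,  true
]
for sales, years, expected in paths:
    assert abs(bonus(sales, years) - expected) < 1e-9
print("all four paths exercised")
```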
Customer (end user)
The individual or organization, internal or external to the producing organization, that receives the product.
Cyclomatic Complexity
A measure of the number of linearly independent paths through a program module.
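As a worked example of the measure: for a control-flow graph with E edges and N nodes, V(G) = E - N + 2, which also equals the number of decision points plus one. The node and edge counts below are for a hypothetical if/else inside a loop:

```python
# Cyclomatic-complexity sketch: V(G) = E - N + 2 for a control-flow
# graph with E edges and N nodes.

def cyclomatic(edges, nodes):
    return edges - nodes + 2

# An if/else nested in a while loop: 7 nodes (entry, loop test, if
# test, then, else, merge, exit) and 8 edges give V(G) = 3, matching
# its two decision points plus one.
print(cyclomatic(8, 7))  # 3
```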
Debugging
The act of attempting to determine the cause of the symptoms of malfunctions detected by testing or by frenzied user complaints.
Defect Analysis
Using defects as data for continuous quality improvement. Defect analysis generally seeks to classify defects into categories and identify
possible causes in order to direct process improvement efforts.
Defect Density
Ratio of the number of defects to program length (a relative number).
Defect
NOTE: Operationally, it is useful to work with two definitions of a defect:
1) From the producer’s viewpoint: a product requirement that has not been met or a product attribute possessed by a product or a function
performed by a product that is not in the statement of requirements that define the product.
2) From the end user’s viewpoint: anything that causes end user dissatisfaction, whether in the statement of requirements or not.
Error
1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or
condition; and
2) a mental mistake made by a programmer that may result in a program fault.
Error-based Testing
Testing where information about programming style, error-prone language constructs, and other programming knowledge is applied to
select test data capable of detecting faults, either a specified class of faults or all possible faults.
Desk Checking
A form of manual static analysis usually performed by the originator. Source code documentation, etc., is visually checked against
requirements and standards.
Dynamic Analysis
The process of evaluating a program based on execution of that program. Dynamic analysis approaches rely on executing a piece of
software with selected test data.
Dynamic Testing
Verification or validation performed which executes the system’s code.
Partition Testing
This method categorizes the inputs and outputs of a class in order to test them separately. This minimizes the number of test cases that
have to be designed.
To determine the different categories to test, partitioning can be broken down as follows:
- State-based partitioning - categorizes class operations based on how they change the state of a class
- Attribute-based partitioning - categorizes class operations based on attributes they use
- Category-based partitioning - categorizes class operations based on the generic function the operations perform
Evaluation
The process of examining a system or system component to determine the extent to which specified properties are present.
Execution
The process of a computer carrying out an instruction or instructions of a computer program.
Exhaustive Testing
Executing the program with all possible combinations of values for program variables.
Failure
The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault
is encountered.
Failure-directed Testing
Testing based on the knowledge of the types of errors made in the past that are likely for the system under test.
Fault
A manifestation of an error in software. A fault, if encountered, may cause a failure.
Fault-based Testing
1. Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-
specified faults, typically, frequently occurring faults.
2. This type of testing allows for designing test cases based on the client specification or the code or both. It tries to identify plausible faults
(areas of design or code that may lead to errors). For each of these faults a test case is developed to "flush" the errors out. These tests also
force each line of code to be executed.
Flowchart
A diagram showing the sequential steps of a process or of a workflow around a product or service.
Formal Review
A technical review conducted with the end user, including the types of reviews called for in the standards.
Function Points
A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment
characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present; 1 = minor
influence; 5 = strong influence.
Heuristics Testing
Another term for failure-directed testing.
Histogram
A graphical description of individual measured values in a data set that is organized according to the frequency or relative frequency of
occurrence. A histogram illustrates the shape of the distribution of individual values in a data set along with information regarding the
average and variation.
Hybrid Testing
A combination of top-down testing combined with bottom-up testing of prioritized or available components.
Incremental Analysis
Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the
development of that product.
Infeasible Path
Program statement sequence that can never be executed.
Inputs
Products, services, or information needed from suppliers to make a process work.
Operational Requirements
Qualitative and quantitative parameters that specify the desired operational capabilities of a system and serve as a basis for determining
the operational effectiveness and suitability of a system prior to deployment.
Intrusive Testing
Testing that collects timing and processing information during program execution that may change the behavior of the software from its
behavior in a real environment. Usually involves additional code embedded in the software being tested or additional processes running
concurrently with software being tested on the same platform.
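As a sketch of embedded instrumentation (the decorator and function names are hypothetical), a wrapper can record a timing for every call to the code under test; that extra bookkeeping is precisely the overhead that can make the instrumented program behave differently from the real one:

```python
import time

def instrumented(fn):
    """Wrap fn so each call's duration is recorded (intrusive: adds overhead)."""
    timings = []
    def wrapper(*args):
        start = time.perf_counter()
        result = fn(*args)
        timings.append(time.perf_counter() - start)
        return result
    wrapper.timings = timings
    return wrapper

@instrumented
def work(n):
    return sum(range(n))

work(1000)
print(len(work.timings))  # one timing record collected so far
```

Non-intrusive testing would gather the same timing data with external hardware or a separate observer process, leaving `work` untouched.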
Random Testing
This is one of the methods used to exercise a class. It is based on developing a random test sequence that exercises the minimum number of
operations typical of the behavior of the class.
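A minimal sketch of the technique, using an invented `Stack` class: drive it with a random sequence of its typical operations and check a size invariant after every step. The seed makes the random sequence reproducible:

```python
import random

class Stack:
    """Hypothetical class under test."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()
    def size(self):
        return len(self._items)

random.seed(42)  # reproducible random sequence
s = Stack()
for _ in range(100):
    before = s.size()
    # Choose a typical operation at random; never pop an empty stack.
    if before == 0 or random.random() < 0.6:
        s.push(random.randint(0, 9))
        assert s.size() == before + 1
    else:
        s.pop()
        assert s.size() == before - 1
print("random sequence passed")
```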
Mutation Testing
A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants
of the program.
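A toy illustration of the idea (functions invented for this sketch): the mutant differs from the original by one small change, and a thorough test set should "kill" it by producing a different result for at least one test input:

```python
def original(a, b):
    return a + b

def mutant(a, b):
    return a - b  # slight variant: '+' mutated to '-'

tests = [(0, 0), (1, 1), (2, 3)]

# The mutant is killed if any test distinguishes it from the original.
killed = any(original(a, b) != mutant(a, b) for a, b in tests)
print("mutant killed" if killed else "mutant survived")
```

Note that a test set containing only `(0, 0)` would let this mutant survive, revealing that the test set is not thorough enough to discriminate the program from its variant.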
Non-intrusive Testing
Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the
software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing
information and processes that information on another platform.
Operational Testing
Testing performed by the end user on software in its normal operating environment.
Metric
A measure of the extent or degree to which a product possesses and exhibits a certain quality, property, or attribute.
Outputs
Products, services, or information supplied to meet end user needs.
Path Analysis
Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the
program that are not on any path.
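For a function with two independent decisions there are 2 × 2 = 4 candidate paths; a sketch (the function is invented) can enumerate them by trying every combination of branch outcomes:

```python
from itertools import product

def f(a: bool, b: bool) -> str:
    # Record which branch was taken at each of the two decisions.
    path = []
    path.append("A" if a else "a")
    path.append("B" if b else "b")
    return "".join(path)

# Enumerate every branch-outcome combination; here all four paths are feasible.
paths = {f(a, b) for a, b in product([True, False], repeat=2)}
print(sorted(paths))
```

When branch conditions are not independent, some combinations turn out to be infeasible paths, and the enumeration exposes portions of the program that are not on any executable path.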
Peer Reviews
A methodical examination of software work products by the producer’s peers to identify defects and areas where changes are needed.
Policy
Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).
Problem
Any deviation from defined standards. Same as defect.
Test Bed
1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to
conduct a test of a logically or physically separate component.
2) A suite of test programs used in conducting the test of a component or system.
Procedure
The step-by-step method followed to ensure that standards are met.
Supplier
An individual or organization that supplies inputs needed to generate a product, service, or information to an end user.
Process
The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.
ISSUES
Invariably there will be issues with software testing under both models, simply because both environments are dynamic and must deal with
ongoing changes during the life cycle of the project: changes in specifications, analysis, design, and development. All of these, of course,
affect testing. However, we will concentrate on possible problem areas within the testing strategies and methods, and examine how these
issues pertain to each environment.
Syntax
1) The relationship among characters or groups of characters independent of their meanings or the manner of their interpretation and use;
2) the structure of expressions in a language; and
3) the rules governing the structure of the language.
Test Specifications
The test case specifications should be developed from the test plan and are the second phase of the test development life cycle. The test
specification should explain "how" to implement the test cases described in the test plan.
Test Specification Items
Each test specification should contain the following items:
Case No.: The test case number should be a three-part identifier of the form c.s.t, where: c is the chapter number, s is the
section number, and t is the test case number.
Title: is the title of the test.
ProgName: is the program name containing the test.
Author: is the person who wrote the test specification.
Date: is the date of the last revision to the test case.
Background: (Objectives, Assumptions, References, Success Criteria): Describes in words how to conduct the test.
Expected Error(s): Describes any errors expected.
Reference(s): Lists reference documentation used to design the specification.
Data: (Tx Data, Predicted Rx Data): Describes the data flows between the Implementation Under Test (IUT) and the test engine.
Script: (Pseudo Code for Coding Tests): Pseudo code (or real code) used to conduct the test.
Example Test Specification
Test Specification
Case No. 7.6.3 Title: Invalid Sequence Number (TC)
ProgName: UTEP221 Author: B.C.G. Date: 07/06/2000
Background: (Objectives, Assumptions, References, Success Criteria)
Validate that the IUT will reject a normal flow PIU with a transmission header that has an invalid sequence number.
Expected Sense Code: $2001, Sequence Number Error
Reference - SNA Format and Protocols Appendix G/p. 380
Data: (Tx Data, Predicted Rx Data)
IUT
<-------- DATA FIS, OIC, DR1 SNF=20
<-------- DATA LIS, SNF=20
--------> -RSP $2001