
A Checklist of Common GUI Errors Found in Windows, Child Windows, and Dialog Boxes

1. Assure that the start-up icon for the application under consideration is unique
from all other current applications.
2. Assure the presence of a control menu in each window and dialog box.
3. Assure the correctness of the Multiple Document Interface (MDI) of each
window - Only the parent window should be modal (All child windows should be
presented within the confines of the parent window).
4. Assure that all windows have a consistent look and feel.
5. Assure that all dialog boxes have a consistent look and feel.
6. Assure that the child windows can be cascaded or tiled within the parent window.

7. Assure that icons which represent minimized child windows can be arranged
within the parent window.
8. Assure the existence of the "File" menu.
9. Assure the existence of the "Help" menu.
10. Assure the existence of a "Window" Menu.
11. Assure the existence and proper location of any other menus which are logically
required by the application.
12. Assure that the proper commands and options are in each menu.
13. Assure that all buttons on all tool bars have corresponding menu commands.

14. Assure that each menu command has an alternative (hot-key) key sequence
which will invoke it where appropriate.
15. In "tabbed" dialog boxes, assure that the tab names are not abbreviations.
16. In "tabbed" dialog boxes, assure that the tabs can be accessed via appropriate hot
key combinations.
17. In "tabbed" dialoged boxes, assure that duplicate hot keys do not exist
18. Assure that tabs are placed horizontally across the top (avoid placing tabs
vertically on the sides as this makes the names hard to read).
19. Assure the proper usage of the escape key (which is to roll back any changes
that have been made).
20. Assure that the cancel button functions the same as the escape key.
21. Assure that the Cancel button becomes a Close button when changes have been
made that cannot be rolled back.
22. Assure that only command buttons which are used by a particular window, or in
a particular dialog box, are present.
23. When a command button is used sometimes and not at other times, assure that it
is grayed out when it should not be used.
24. Assure that OK and Cancel buttons are grouped separately from other command
buttons.
25. Assure that command button names are not abbreviations.
26. Assure that command button names are not technical labels, but rather are names
meaningful to system users.
27. Assure that command buttons are all of similar size and shape.
28. Assure that each command button can be accessed via a hot key combination
(except the OK and CANCEL buttons which do not normally have hot keys).
29. Assure that command buttons in the same window/dialog box do not have
duplicate hot keys.
30. Assure that each window/dialog box has a clearly marked default value
(command button, or other object) which is invoked when the Enter key is pressed.
31. Assure that focus is set to an object which makes sense according to the function
of the window/dialog box.
32. Assure that option button (AKA radio button) names are not abbreviations.
33. Assure that option button names are not technical labels, but rather are names
meaningful to system users.
34. If hot keys are used to access option buttons, assure that duplicate hot keys do
not exist in the same window/dialog box.
35. Assure that option box names are not abbreviations.
36. Assure that option box names are not technical labels, but rather are names
meaningful to system users.
37. If hot keys are used to access option boxes, assure that duplicate hot keys do not
exist in the same window/dialog box.
38. Assure that option boxes, option buttons, and command buttons are logically
grouped together in clearly demarcated areas.
39. Assure that each demarcated area has a meaningful name that is not an
abbreviation.
40. Assure that the Tab key sequence which traverses the defined areas does so in a
logical way.
41. Assure that the parent window has a status bar.
42. Assure that all user-related system messages are presented via the status bar.
43. Assure consistency of mouse actions across windows.
44. Assure that the color red is not used to highlight active GUI objects (many
individuals are red-green color blind).
45. Assure that the user will have control of the desktop with respect to general
color and highlighting (the application should not dictate the desktop background
characteristics).
46. Assure that the GUI does not have a cluttered appearance (GUIs should not be
designed to look like mainframe character user interfaces (CUIs) when replacing
such data entry/retrieval screens).

Testing GUIs
Modern software applications often have sophisticated user interfaces. Because the number of lines of code (or reusable components) required for GUI implementation can often exceed the number of lines of code for other elements of the software, thorough testing of the user interface is essential. For this checklist, the more questions that elicit a negative response, the higher the risk that the GUI will not adequately meet the needs of the end-user.

For windows:

Will the window open properly based on related typed or menu-based commands?
Can the window be resized, moved, scrolled?
Is all data content contained within the window properly addressable with a 
mouse, function keys, directional arrows, and keyboard?
Does the window properly regenerate when it is overwritten and then recalled?
Are all functions that relate to the window available when needed?
Are all functions that relate to the window operational?
Are all relevant pull-down menus, tool bars, scroll bars, dialog boxes and buttons, icons, and other controls available and properly displayed for the window?
When multiple windows are displayed, is the name of the window properly represented?
Is the active window properly highlighted?
If multitasking is used, are all windows updated at appropriate times?
Do multiple or incorrect mouse picks within the window cause unexpected 
side effects?
Are audio and/or color prompts within the window or as a consequence of 
window operations presented according to specification?
Does the window properly close?

For pull down menus and mouse operations:

Is the appropriate menu bar displayed in the appropriate context?
Does the application menu bar display system related features (e.g., a clock 
display)?
Do pull-down operations work properly?
Do break-away menus, palettes, and tool bars work properly?
Are all menu functions and pull down subfunctions properly listed?
Are all menu functions properly addressable by the mouse?
Are text typeface, size and format correct?
Is it possible to invoke each menu function using its alternative text-based command?
Are menu functions highlighted (or grayed­out) based on the context of 
current operations within a window?
Does each menu function perform as advertised?
Are the names of menu functions self explanatory?
Is help available for each menu item and is it context sensitive?
Are mouse operations properly recognized throughout the interactive 
context?
If multiple clicks are required, are they properly recognized in context?
If the mouse has multiple buttons, are they properly recognized in context?
Do the cursor, processing indicator (e.g., an hour glass or clock), and pointer 
properly change as different operations are invoked?

Data entry:

Is alphanumeric data entry properly echoed and input to the system?
Do graphical modes of data entry (e.g., a slide bar) work properly?
Is invalid data properly recognized?
Are data input messages intelligible?
Four Stages of GUI Testing

The four stages are summarised in Table 2 below. We can map the four test stages to traditional test stages as follows:

Low level - maps to a unit test stage.


Application - maps to either a unit test or functional system test stage.
Integration - maps to a functional system test stage.
Non-functional - maps to non-functional system test stage.

Stage - Test Types

Low Level - Checklist testing, Navigation
Application - Equivalence Partitioning, Boundary Values, Decision Tables, State Transition Testing
Integration - Desktop Integration, C/S Communications, Synchronisation
Non-Functional - Soak testing, Compatibility testing, Platform/environment

Checklist Testing

Programming/GUI standards covering standard features such as:


window size, positioning, type (modal/non-modal)
standard system commands/buttons (close, minimise, maximise etc.)

Application standards or conventions such as:


standard OK, cancel, continue buttons, appearance, colour, size, location
consistent use of buttons or controls
object/field labelling to use standard/consistent text

Navigation Testing

To conduct meaningful navigation tests the following are required to be in place:

An application backbone with at least the required menu options and call mechanisms to call the window under test.
Windows that can invoke the window under test.
Windows that are called by the window under test.

Obviously, if any of the above components are not available, stubs and/or drivers will be necessary to implement navigation tests. If we assume all required components are available, what tests should we implement? We can split the task into steps:

For every window, identify all the legitimate calls to the window that the application should allow and create test cases for each call.
Identify all the legitimate calls from the window to other features that the application should allow and create test cases for each call.
Identify reversible calls, i.e. where closing a called window should return to the ‘calling’ window, and create a test case for each.
Identify irreversible calls, i.e. where the calling window closes before the called window appears.

There may be multiple ways of executing a call to another window, i.e. menus, buttons, keyboard commands. In this circumstance, consider creating one test case for each valid path by each available means of navigation.
Note that navigation tests reflect only a part of the full integration testing that should be undertaken. These tests constitute the ‘visible’ integration testing of the GUI components that a ‘black box’ tester should undertake.
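
As an illustration only, the navigation cases above can be enumerated mechanically. The Python sketch below uses invented window names and call mechanisms; it simply prints one test case per valid path by each available means of navigation.

    from itertools import product

    # Assumed example data standing in for the application's real call map.
    valid_calls = [
        ("Main Menu", "Customer Details"),        # (calling window, called window)
        ("Customer Details", "Payment History"),
    ]
    navigation_means = ["menu option", "command button", "keyboard shortcut"]

    def navigation_test_cases(calls, means):
        # One test case per valid path by each available means of navigation.
        for (caller, callee), how in product(calls, means):
            yield (f"From '{caller}', open '{callee}' via {how}; verify it opens, "
                   f"then close it and verify control returns to '{caller}'.")

    for case in navigation_test_cases(valid_calls, navigation_means):
        print(case)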

Application Testing

Application testing is the testing that would normally be undertaken on a forms-based application. This testing focuses very much on the behaviour of the objects within windows. The approach to testing a window is virtually the same as would be adopted when testing a single form. The traditional black-box test design techniques are directly applicable in this context.

Technique - Used to test

Equivalence Partitions and Boundary Value Analysis - Input validation, simple rule-based processing
Decision Tables - Complex logic or rule-based processing
State-transition testing - Applications with modes or states where processing behaviour is affected; windows where there are dependencies between objects in the window
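
For example, a decision table can be expanded into test cases by enumerating every combination of its conditions. The sketch below uses an invented three-condition rule purely to illustrate the idea; the real conditions and expected actions come from the application's specification.

    from itertools import product

    conditions = ["account_open", "credit_ok", "over_limit"]

    def expected_action(account_open, credit_ok, over_limit):
        # Invented business rule standing in for the specified behaviour.
        if not account_open:
            return "reject"
        if credit_ok and not over_limit:
            return "approve"
        return "refer"

    # One test case per row of the full decision table (2**3 = 8 combinations).
    for values in product([True, False], repeat=len(conditions)):
        case = dict(zip(conditions, values))
        print(case, "-> expected:", expected_action(**case))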

Desktop Integration Testing

We define desktop integration as the integration and testing of a client application with these other components. Because these interfaces may be hidden or appear ‘seamless’ when working, the tester usually needs to understand a little more about the technical implementation of the interface before tests can be specified. The tester needs to know what interfaces exist, what mechanisms are used by these interfaces and how the interface can be exercised by using the application user interface.

To derive a list of test cases the tester needs to ask a series of questions for each known interface:

Is there a dialogue between the application and interfacing product (i.e. a sequence of stages with different message types to test individually) or is it a direct call made once only?
Is information passed in both directions across the interface?
Is the call to the interfacing product context sensitive?
Are there different message types? If so, how can these be varied?
In principle, the tester should prepare test cases to exercise each message type in circumstances where data is passed in both directions. Typically, once the nature of the interface is known, equivalence partitioning, boundary values analysis and other techniques can be used to expand the list of test cases.
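
A minimal sketch of that derivation is shown below; the interfacing products and message types are assumptions for illustration, and each printed line corresponds to one test case.

    from itertools import product

    # Assumed interfaces and message types - replace with those of the real system.
    interfaces = {
        "spreadsheet export": ["summary report", "detail report"],
        "e-mail client": ["notification message"],
    }
    directions = ["application to product", "product to application"]

    for product_name, message_types in interfaces.items():
        for message, direction in product(message_types, directions):
            print(f"Exercise '{product_name}' with a '{message}', data flowing "
                  f"{direction}; check the data arrives intact and failures are reported.")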

Client/Server Communication Testing

Client/Server communication testing complements the desktop integration testing. This aspect covers the integration of a desktop application with the server-based processes it must communicate with. The discussion of the types of test cases for this testing is similar to section 3.4 Desktop Integration, except there should be some attention paid to testing for failure of server-based processes.

In the most common situation, clients communicate directly with database servers. Here the particular tests to be applied should cover the various types of responses a database server can make. For example:

Logging into the network, servers and server-based DBMS.


Single and multiple responses to queries.
Correct handling of errors (where the SQL syntax is incorrect, or the database server or network has failed).
Null and high volume responses (where no rows or a large number of rows are returned).

The response times of transactions that involve client/server communication may be of interest. These tests might be automated, or timed using a stopwatch, to obtain indicative measures of speed.
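
The sketch below illustrates those response types against an in-memory SQLite database. It is a stand-in only; the real tests would drive the client application against its server-based DBMS.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(i, i * 10.0) for i in range(1000)])

    # Single response to a query
    assert len(conn.execute("SELECT * FROM orders WHERE id = 1").fetchall()) == 1
    # Null response (no rows returned)
    assert conn.execute("SELECT * FROM orders WHERE id = -1").fetchall() == []
    # High volume response (a large number of rows returned)
    assert len(conn.execute("SELECT * FROM orders").fetchall()) == 1000
    # Error handling: incorrect SQL syntax should be reported, not crash the client
    try:
        conn.execute("SELEC * FROM orders")
    except sqlite3.OperationalError as error:
        print("Syntax error correctly reported:", error)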

Synchronisation Testing

There may be circumstances in the application under test where there are dependencies between different features. One scenario is when two windows are displayed, a change is made to a piece of data on one window and the other window needs to change to reflect the altered state of data in the database. To accommodate such dependencies, there is a need for the dependent parts of the application to be synchronised.
Examples of synchronisation are when:

The application has different modes - if a particular window is open, then certain menu options become available (or unavailable).
If the data in the database changes and these changes are notified to the application by an unsolicited event to update displayed windows.
If data on a visible window is changed and makes data on another displayed window inconsistent.

In some circumstances, there may be reciprocity between windows. For example, changes on window A trigger changes in window B and the reverse effect also applies (changes in window B trigger changes on window A).

In the case of displayed data, there may be other windows that display the same or similar data which either cannot be displayed simultaneously, or should not change for some reason. These situations should be considered also. To derive synchronisation test cases:

Prepare one test case for every window object affected by a change or unsolicited event and one test case for reciprocal situations.
Prepare one test case for every window object that must not be affected - but might be.
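
As a sketch, the two rules above can be captured in a small dependency map from which the synchronisation cases are listed; the window and event names are invented for illustration.

    # Assumed dependency map: which displayed objects must (and must not) change
    # when a given event occurs.
    dependencies = {
        "customer address changed": {
            "affected": ["Customer Details window", "Invoice Preview window"],
            "not_affected": ["Payment History window"],
        },
    }

    for event, objects in dependencies.items():
        for obj in objects["affected"]:
            print(f"After '{event}', verify the {obj} is refreshed to show the change.")
        for obj in objects["not_affected"]:
            print(f"After '{event}', verify the {obj} is NOT affected.")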

Non-Functional Testing

The tests described in the previous sections are functional tests. These tests are adequate for demonstrating the software meets its requirements and does not fail. However, GUI applications have non-functional modes of failure also. We propose three additional GUI test types (that are likely to be automated).

Soak Testing

In production, systems might be operated continuously for many hours. Applications may be comprehensively tested over a period of weeks or months but are not usually operated for extended periods in this way. It is common for client application code and bespoke middleware to have memory-leaks. Soak tests exercise system transactions continuously for an extended period in order to flush out such problems.

These tests are normally conducted using an automated tool. Selected transactions are repeatedly executed and machine resources on the client (or the server) monitored to identify resources that are being allocated but not returned by the application code.
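
A rough skeleton of such a tool-driven soak test is sketched below. run_transaction() is a placeholder for the real scripted transaction, and the resource module used for the memory sample is POSIX-only (on Windows a library such as psutil would be used instead).

    import time
    import resource  # POSIX-only; use psutil or similar on Windows

    def run_transaction():
        # Placeholder for one scripted end-to-end transaction of the system under test.
        pass

    def soak(duration_seconds=60, sample_every=100):
        samples, count = [], 0
        finish = time.time() + duration_seconds
        while time.time() < finish:
            run_transaction()
            count += 1
            if count % sample_every == 0:
                # Peak memory of this process; steady growth suggests a leak.
                samples.append(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
        return samples

    print(soak(duration_seconds=5))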

Compatibility Testing

Whether applications interface directly with other desktop products or simply co-exist on the same desktop, they share the same resources on the client. Compatibility Tests are (usually) automated tests that aim to demonstrate that resources that are shared with other desktop products are not locked unnecessarily, causing the system under test or the other products to fail.

These tests normally execute a selected set of transactions in the system under test and then switch to exercising other desktop products in turn, doing this repeatedly over an extended period.

Platform/Environment Testing

In some environments, the platform upon which the developed GUI application is deployed may not be under the control of the developers. PC end-users may have a variety of hardware types such as 486 and Pentium machines, various video drivers, Microsoft Windows 3.1, 95 and NT. Most users have PCs at home nowadays and know how to customise their PC configuration. Although your application may be designed to operate on a variety of platforms, you may have to execute tests of these various configurations to ensure that when the software is implemented, it continues to function as designed. In this circumstance, the testing requirement is for a repeatable regression test to be executed on a variety of platforms and configurations. Again, the requirement for automated support is clear, so we would normally use a tool to execute these tests on each of the platforms and configurations as required.
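
The sketch below shows the shape of such a configuration matrix; the platforms listed are simply those mentioned above, and run_regression_suite() stands in for invoking the automated regression tests on a machine prepared with that configuration.

    from itertools import product

    operating_systems = ["Windows 3.1", "Windows 95", "Windows NT"]
    display_configurations = ["640x480, 16 colours", "800x600, 256 colours"]

    def run_regression_suite(configuration):
        # Placeholder for launching the repeatable regression test on this configuration.
        print("Running regression suite on:", configuration)

    for configuration in product(operating_systems, display_configurations):
        run_regression_suite(configuration)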

Test Types - Manual or Automated?

Checklist testing - Manual execution of tests of application conventions
Checklist testing - Automated execution of tests of object states, menus and standard features
Navigation - Automated execution
Equivalence Partitioning, Boundary Values, Decision Tables, State Transition Testing - Automated execution of large numbers of simple tests of the same functionality or process, e.g. the 256 combinations indicated by a decision table; manual execution of low volume or complex tests
Desktop Integration, C/S Communications - Automated execution of repeated tests of simple transactions; manual tests of complex interactions
Synchronisation - Manual execution
Soak testing, Compatibility testing, Platform/environment - Automated execution
GUI Testing Checklist
CONTENTS:
Section 1 - Windows Compliance Testing
1.1. Application
1.2. For Each Window in the Application
1.3. Text Boxes
1.4. Option (Radio Buttons)
1.5. Check Boxes
1.6. Command Buttons
1.7. Drop Down List Boxes
1.8. Combo Boxes
1.9. List Boxes
Section 2 - Tester’s Screen Validation Checklist
2.1. Aesthetic Conditions
2.2. Validation Conditions
2.3. Navigation Conditions
2.4. Usability Conditions
2.5. Data Integrity Conditions
2.6. Modes (Editable Read-only) Conditions
2.7. General Conditions
2.8. Specific Field Tests
2.8.1. Date Field Checks
2.8.2. Numeric Fields
2.8.3. Alpha Field Checks
Section 3 - Other
3.1. On every Screen
3.2. Shortcut keys / Hot Keys
3.3. Control Shortcut Keys

Section 4 Right Click option

1. Windows Compliance

WINDOWS COMPLIANCE TESTING


For Each Application
Start Application by Double Clicking on its ICON
The Loading message should show the application name, version number, and a bigger pictorial
representation of the icon.
No Login is necessary
The main window of the application should have the same caption as the caption of the icon in
Program Manager.
Closing the application should result in an "Are you Sure" message box
Attempt to start application Twice
This should not be allowed - you should be returned to main Window
Try to start the application twice as it is loading.
On each window, if the application is busy, then the hour glass should be displayed. If there is no
hour glass (e.g. alpha access enquiries) then some enquiry in progress message should be displayed.
All screens should have a Help button; pressing F1 should do the same.

For Each Window in the Application


If Window has a Minimise Button, click it.
Window Should return to an icon on the bottom of the screen
This icon should correspond to the Original Icon under Program Manager.
Double Click the Icon to return the Window to its original size.
The window caption for every application should have the name of the application and the window
name - especially the error messages. These should be checked for spelling, English and clarity,
especially on the top of the screen. Check that the title of the window makes sense.
If the screen has a Control menu, then use all ungreyed options. (see below)
Check all text on window for Spelling/Tense and Grammar
Use TAB to move focus around the Window. Use SHIFT+TAB to move focus backwards.
Tab order should be left to right, and Up to Down within a group box on the screen. All controls
should get focus - indicated by dotted box, or cursor. Tabbing to an entry field with text in it should
highlight the entire text in the field.
The text in the Micro Help line should change - Check for spelling, clarity and non-updateable etc.
If a field is disabled (greyed) then it should not get focus. It should not be possible to select them with
either the mouse or by using TAB. Try this for every greyed control.
Never updateable fields should be displayed with black text on a grey background with a black label.
All text should be left-justified, followed by a colon tight to it.
In a field that may or may not be updateable, the label text and contents changes from black to grey
depending on the current status.
List boxes are always white background with black text whether they are disabled or not. All others
are grey.
In general, do not use goto screens, use gosub, i.e. if a button causes another screen to be displayed,
the screen should not hide the first screen, with the exception of tab in 2.0

When returning, return to the first screen cleanly, i.e. no other screens/applications should appear.
In general, double-clicking is not essential. In general, everything can be done using both the mouse
and the keyboard.
All tab buttons should have a distinct letter.

Text Boxes
Move the Mouse Cursor over all Enterable Text Boxes. Cursor should change from arrow to Insert
Bar. If it doesn't then the text in the box should be grey or non-updateable. Refer to previous page.
Enter text into Box
Try to overflow the text by typing too many characters - this should be stopped. Check the field width
with capital Ws.
Enter invalid characters - letters in amount fields; try strange characters like +, -, * etc. in all fields.
SHIFT and Arrow should Select Characters. Selection should also be possible with mouse. Double
Click should select all text in box.

Option (Radio Buttons)


Left and Right arrows should move the 'ON' selection; so should Up and Down. Select with the mouse by
clicking.

Check Boxes
Clicking with the mouse on the box, or on the text should SET/UNSET the box. SPACE should do
the same

Command Buttons
If Command Button leads to another Screen, and if the user can enter or change details on the other
screen then the Text on the button should be followed by three dots.
All buttons except for OK and Cancel should have a letter access to them. This is indicated by a
letter underlined in the button text. The button should be activated by pressing ALT+letter. Make
sure there is no duplication.
Click each button once with the mouse - This should activate
Tab to each button - Press SPACE - This should activate
Tab to each button - Press RETURN - This should activate
The above are VERY IMPORTANT, and should be done for EVERY command Button.
Tab to another type of control (not a command button). One button on the screen should be the default
(indicated by a thick black border). Pressing Return in any non-command-button control should
activate it.
If there is a Cancel Button on the screen , then pressing <Esc> should activate it.
If pressing the Command button results in uncorrectable data e.g. closing an action step, there should
be a message phrased positively with Yes/No answers where Yes results in the completion of the
action.

Drop Down List Boxes


Pressing the Arrow should give list of options. This List may be scrollable. You should not be able to
type text in the box.
Pressing a letter should bring you to the first item in the list that starts with that letter. Pressing
‘Ctrl - F4’ should open/drop down the list box.
Spacing should be compatible with the existing windows spacing (word etc.). Items should be in
alphabetical order with the exception of blank/none which is at the top or the bottom of the list box.
Dropping down the list with an item selected should display the list with the selected item at the top.
Make sure only one blank entry appears; there shouldn't be a blank line at the bottom.
Combo Boxes
Should allow text to be entered. Clicking Arrow should allow user to choose from list
List Boxes
Should allow a single selection to be chosen, by clicking with the mouse, or using the Up and Down
Arrow keys.
Pressing a letter should take you to the first item in the list starting with that letter.
If there is a 'View' or 'Open' button beside the list box then double clicking on a line in the List Box,
should act in the same way as selecting an item in the list box, then clicking the command button.
Force the scroll bar to appear, make sure all the data can be seen in the box.

2. Screen Validation Checklist

AESTHETIC CONDITIONS:
1. Is the general screen background the correct colour?
2. Are the field prompts the correct colour?
3. Are the field backgrounds the correct colour?
4. In read-only mode, are the field prompts the correct colour?
5. In read-only mode, are the field backgrounds the correct colour?
6. Are all the screen prompts specified in the correct screen font?
7. Is the text in all fields specified in the correct screen font?
8. Are all the field prompts aligned perfectly on the screen?
9. Are all the field edit boxes aligned perfectly on the screen?
10. Are all groupboxes aligned correctly on the screen?
11. Should the screen be resizable?
12. Should the screen be minimisable?
13. Are all the field prompts spelt correctly?
14. Are all character or alpha-numeric fields left justified? This is the default
unless otherwise specified.
15. Are all numeric fields right justified? This is the default unless otherwise
specified.
16. Is all the microhelp text spelt correctly on this screen?
17. Is all the error message text spelt correctly on this screen?
18. Is all user input captured in UPPER case or lower case consistently?
19. Where the database requires a value (other than null) then this should be
defaulted into fields. The user must either enter an alternative valid value
or leave the default value intact.
20. Assure that all windows have a consistent look and feel.
21. Assure that all dialog boxes have a consistent look and feel.

VALIDATION CONDITIONS:
1. Does a failure of validation on every field cause a sensible user error
message?
2. Is the user required to fix entries which have failed validation tests?
3. Have any fields got multiple validation rules and if so are all rules being
applied?
4. If the user enters an invalid value and clicks on the OK button (i.e. does
not TAB off the field) is the invalid entry identified and highlighted
correctly with an error message?
5. Is validation consistently applied at screen level unless specifically required
at field level?
6. For all numeric fields check whether negative numbers can and should be
able to be entered.
7. For all numeric fields check the minimum and maximum values and also
some mid-range values allowable?
8. For all character/alphanumeric fields check the field to ensure that there is
a character limit specified and that this limit is exactly correct for the
specified database size?
9. Do all mandatory fields require user input?
10. If any of the database columns don’t allow null values then the
corresponding screen fields must be mandatory. (If any field which initially
was mandatory has become optional then check whether null values are
allowed in this field.)

NAVIGATION CONDITIONS:
1. Can the screen be accessed correctly from the menu?
2. Can the screen be accessed correctly from the toolbar?
3. Can the screen be accessed correctly by double clicking on a list control on
the previous screen?
4. Can all screens accessible via buttons on this screen be accessed correctly?
5. Can all screens accessible by double clicking on a list control be accessed
correctly?
6. Is the screen modal, i.e. is the user prevented from accessing other
functions when this screen is active, and is this correct?
7. Can a number of instances of this screen be opened at the same time and is
this correct?

USABILITY CONDITIONS:
1. Are all the dropdowns on this screen sorted correctly? Alphabetic sorting
is the default unless otherwise specified.
2. Is all date entry required in the correct format?
3. Have all pushbuttons on the screen been given appropriate Shortcut keys?
4. Do the Shortcut keys work correctly?
5. Have the menu options which apply to your screen got fast keys associated
and should they have?
6. Does the Tab Order specified on the screen go in sequence from Top Left
to bottom right? This is the default unless otherwise specified.
7. Are all read-only fields avoided in the TAB sequence?
8. Are all disabled fields avoided in the TAB sequence?
9. Can the cursor be placed in the microhelp text box by clicking on the text
box with the mouse?
10. Can the cursor be placed in read-only fields by clicking in the field with the
mouse?
11. Is the cursor positioned in the first input field or control when the screen is
opened?
12. Is there a default button specified on the screen?
13. Does the default button work correctly?
14. When an error message occurs does the focus return to the field in error
when the user cancels it?
15. When the user Alt+Tab’s to another application does this have any impact
on the screen upon return to the application?
16. Do all the field edit boxes indicate the number of characters they will hold
by their length? E.g. a 30 character field should be a lot longer.

DATA INTEGRITY CONDITIONS:


1. Is the data saved when the window is closed by double clicking on the
close box?
2. Check the maximum field lengths to ensure that there are no truncated
characters?
3. Where the database requires a value (other than null) then this should be
defaulted into fields. The user must either enter an alternative valid value
or leave the default value intact.
4. Check maximum and minimum field values for numeric fields?
5. If numeric fields accept negative values can these be stored correctly on
the database and does it make sense for the field to accept negative
numbers?
6. If a set of radio buttons represent a fixed set of values such as A, B and C
then what happens if a blank value is retrieved from the database? (In some
situations rows can be created on the database by other functions which
are not screen based and thus the required initial values can be incorrect.)
7. If a particular set of data is saved to the database check that each value
gets saved fully to the database. i.e. Beware of truncation (of strings) and
rounding of numeric values.

MODES (EDITABLE READ-ONLY) CONDITIONS:


1. Are the screen and field colours adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-only
mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.

GENERAL CONDITIONS:
1. Assure the existence of the "Help" menu.
2. Assure that the proper commands and options are in each menu.
3. Assure that all buttons on all tool bars have corresponding key commands.
4. Assure that each menu command has an alternative (hot-key) key sequence which
will invoke it where appropriate.
5. In drop down list boxes, ensure that the names are not abbreviations / cut short
6. In drop down list boxes, assure that the list and each entry in the list can be
accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen
8. Ensure the proper usage of the escape key (which is to undo any changes that have
been made) and generates a caution message “Changes will be lost - Continue
yes/no”
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates as a Close button when changes have been
made that cannot be undone.
11. Assure that only command buttons which are used by a particular window, or in a
particular dialog box, are present - i.e. make sure they don’t work on the screen
behind the current screen.
12. When a command button is used sometimes and not at other times, assure that it is
grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other command
buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are names
meaningful to system users.
16. Assure that command buttons are all of similar size and shape, and same font &
font size.
17. Assure that each command button can be accessed via a hot key combination.
18. Assure that command buttons in the same window/dialog box do not have
duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default value (command
button, or other object) which is invoked when the Enter key is pressed - and NOT
the Cancel or Close button
20. Assure that focus is set to an object/button which makes sense according to the
function of the window/dialog box.
21. Assure that all option button (and radio button) names are not abbreviations.
22. Assure that option button names are not technical labels, but rather are names
meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys do not
exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically
grouped together in clearly demarcated areas “Group Box”
26. Assure that the Tab key sequence which traverses the screens does so in a logical
way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many individuals
are red-green color blind).
29. Assure that the user will have control of the desktop with respect to general color
and highlighting (the application should not dictate the desktop background
characteristics).
30. Assure that the screen/window does not have a cluttered appearance
31. Ctrl + F6 opens next tab within tabbed window
32. Shift + Ctrl + F6 opens previous tab within tabbed window
33. Tabbing will open next tab within tabbed window if on last field of current tab
34. Tabbing will go onto the 'Continue' button if on last field of last tab within tabbed
window
35. Tabbing will go onto the next editable field in the window
36. Banner style & size & display exact same as existing windows
37. If 8 or less options in a list box, display all options on open of list box - should be
no need to scroll
38. Errors on continue will cause user to be returned to the tab and the focus should
be on the field causing the error. (i.e the tab is opened, highlighting the field with
the error on it)
39. Pressing continue while on the first tab of a tabbed window (assuming all fields
filled correctly) will not open all the tabs.
40. On open of tab focus will be on first editable field
41. All fonts to be the same
42. Alt+F4 will close the tabbed window and return you to main screen or previous
screen (as appropriate), generating "changes will be lost" message if necessary.
43. Microhelp text for every enabled field & button
44. Ensure all fields are disabled in read-only mode
45. Progress messages on load of tabbed screens
46. Return operates continue
47. If retrieve on load of tabbed window fails window should not open

Specific Field Tests

Date Field Checks


Assure that leap years are validated correctly & do not cause errors/miscalculations
Assure that month code 00 and 13 are validated correctly & do not cause
errors/miscalculations
Assure that 00 and 13 are reported as errors
Assure that day values 00 and 32 are validated correctly & do not cause
errors/miscalculations
Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/
miscalculations
Assure that Feb. 30 is reported as an error
Assure that century change is validated correctly & does not cause errors/
miscalculations
Assure that out of cycle dates are validated correctly & do not cause
errors/miscalculations
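
The date checks above can be scripted. The self-contained sketch below uses Python's datetime.date purely as an example validator; a real test would exercise the application's own date field instead.

    from datetime import date

    def is_valid(year, month, day):
        # Example validator only - a real test drives the application's date field.
        try:
            date(year, month, day)
            return True
        except ValueError:
            return False

    assert is_valid(2000, 2, 29)         # leap year (century divisible by 400)
    assert not is_valid(1900, 2, 29)     # 1900 is not a leap year
    assert not is_valid(2023, 0, 10)     # month 00 rejected
    assert not is_valid(2023, 13, 10)    # month 13 rejected
    assert not is_valid(2023, 5, 0)      # day 00 rejected
    assert not is_valid(2023, 5, 32)     # day 32 rejected
    assert is_valid(2023, 2, 28)         # Feb 28 accepted
    assert not is_valid(2023, 2, 30)     # Feb 30 rejected
    assert is_valid(2100, 1, 1)          # century change handled
    print("date field checks passed")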

Numeric Fields
Assure that lowest and highest values are handled correctly
Assure that invalid values are logged and reported
Assure that valid values are handled by the correct procedure
Assure that numeric fields with a blank in position 1 are processed or reported as an
error
Assure that fields with a blank in the last position are processed or reported as an error
Assure that both + and - values are correctly processed
Assure that division by zero does not occur
Include value zero in all calculations
Include at least one in-range value
Include maximum and minimum range values
Include out of range values above the maximum and below the minimum
Assure that upper and lower values in ranges are handled correctly
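
A minimal sketch of the boundary and zero-value cases above, assuming an illustrative field range of 0 to 9999:

    LOWEST, HIGHEST = 0, 9999            # assumed field range for illustration

    def accepts(value):
        return LOWEST <= value <= HIGHEST

    in_range = [LOWEST, LOWEST + 1, 5000, HIGHEST - 1, HIGHEST]   # boundaries and mid-range
    out_of_range = [LOWEST - 1, HIGHEST + 1]                      # just outside each limit

    assert all(accepts(value) for value in in_range)
    assert not any(accepts(value) for value in out_of_range)

    # Include the value zero in calculations and guard against division by zero.
    def average(values):
        return sum(values) / len(values) if values else 0.0

    assert average([0, 10, 20]) == 10.0
    assert average([]) == 0.0
    print("numeric field checks passed")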

Alpha Field Checks


Use blank and non-blank data
Include lowest and highest values
Include invalid characters & symbols
Include valid characters
Include data items with first position blank
Include data items with last position blank

VALIDATION TESTING - STANDARD ACTIONS

On every Screen
Add
View
Change
Delete
Continue
Add
View
Change
Delete
Cancel
Fill each field - Valid data
Fill each field - Invalid data

Different Check Box combinations

Scroll Lists
Help
Fill Lists and Scroll
Tab
Tab Order
Shift Tab
Shortcut keys - Alt + F

SHORTCUT KEYS / HOT KEYS


CONTROL SHORTCUT KEYS

Recommended CTRL+Letter Shortcuts

Section 4 Right Click option

Copy character values or numeric values with keyboard (Ctrl+C) or mouse and paste them into the textbox with keyboard or mouse.
Classification of Errors by Severity
Often the severity of a software defect can vary even though the software never changes. The reason is that a software defect’s severity depends on the system in which it runs.
For example, the severity of the Pentium’s floating-point defect changes from system to system. On some systems, the severity is small; whereas on other systems, the severity is high.
Another problem (which occurs regularly) is that the definitions of the severity levels (or categories) themselves change depending on the type of system. For example, a catastrophic defect in a nuclear system means that the fault can result in death or environmental harm; a catastrophic defect in a database system means that the fault can (or did) cause the loss of valuable data.
Therefore, the system itself determines the severity of a defect based on the context for which the defect applies. The context makes all the difference in how to classify a defect’s severity.
I have attached two sample classification methods – a 3 level classification method, and a more comprehensive 5 level classification method, which I hope you may find useful.

3 Level Error Classification Method


Errors which are agreed as valid will be categorised as follows:

• Category A - Serious errors that prevent System test of a particular function continuing, or serious data type errors.

• Category B - Serious or missing data related errors that will not prevent implementation.

• Category C - Minor errors that do not prevent or hinder functionality.

Explanation of Classifications

1. An "A" bug is a either a showstopper or of such importance as to radically affect the functionality of the

§ If, because of a consistent crash during processing of a new application, a user could not complete that application.
§ Incorrect data is passed to system resulting in corruption or system crashes

Examples of severely affected functionality:


§ Calculation of repayment term/amount are incorrect
§ Incorrect credit agreements produced

2. Bugs would be classified as "B" where a less important element of functionality is affected, e.g.:

§ a value is not defaulting correctly and it is necessary to input the correct value
§ data is affected which does not have a major impact, for example - where an element of a customer application was not propagated to the database
§ there is an alternative method of completing a particular process - e.g. a problem might occur which has a work-around.
§ Serious cosmetic error on front-end.

3. "C" type bugs are mainly cosmetic bugs i.e.:

§ Incorrect / misspelt text on screens


§ drop down lists missing or repeating an option
5 Level Error Classification Method
1. Catastrophic: Defects that could (or did) cause disastrous consequences for the system in question. E.g. critical loss of data, critical loss of system availability, critical loss of security, critical loss of safety, etc.

2. Severe: Defects that could (or did) cause very serious consequences for the system in question. E.g. a function is severely broken, cannot be used and there is no workaround.

3. Major: Defects that could (or did) cause significant consequences for the system in question - a defect that needs to be fixed but there is a workaround. E.g. 1) losing data from a serial device during heavy loads. E.g. 2) a function is badly broken but a workaround exists.

4. Minor: Defects that could (or did) cause small or negligible consequences for the system in question. Easy to recover or work around. E.g. 1) misleading error messages. E.g. 2) displaying output in a font or format other than what the customer desired.

5. No Effect: Trivial defects that can cause no negative consequences for the system in question. Such defects normally produce no erroneous outputs. E.g. 1) simple typos in documentation. E.g. 2) bad layout or mis-spelling on screen.
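
Purely as an illustration, the five levels can be encoded so that defect reports sort consistently by severity; the defect titles in the sketch below are invented.

    from enum import IntEnum

    class Severity(IntEnum):
        CATASTROPHIC = 1   # disastrous consequences, e.g. critical loss of data
        SEVERE = 2         # function broken and no workaround exists
        MAJOR = 3          # must be fixed, but a workaround exists
        MINOR = 4          # small or negligible consequences
        NO_EFFECT = 5      # trivial, produces no erroneous output

    defects = [
        ("Misleading error message on save", Severity.MINOR),
        ("Repayment amount calculated incorrectly", Severity.CATASTROPHIC),
    ]
    for title, severity in sorted(defects, key=lambda defect: defect[1]):
        print(f"{severity.name}: {title}")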
What is Software Testing...?
1. What is Software Testing

2. Why Testing CANNOT Ensure Quality

3. What is Software Quality?

4. What is Quality Assurance?

5. Software Development & Quality Assurance

6. The difference between QA & Testing

7. The Mission of Testing

1. What is Software Testing?

Software testing is more than just error detection;

Testing software is operating the software under controlled conditions, to (1) verify that it behaves “as specified”; (2) to detect errors; and (3) to validate that what has been specified is what the user actually wanted.

1. Verification is the checking or testing of items, including software, for conformance and consistency by evaluating the results against pre-specified requirements. [Verification: Are we building the system right?]

2. Error Detection: Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn’t or things don’t happen when they should.

3. Validation looks at the system correctness – i.e. it is the process of checking that what has been specified is what the user actually wanted. [Validation: Are we building the right system?]

In other words, validation checks to see if we are building what the customer wants/needs, and verification checks to see if we are building that system correctly. Both verification and validation are necessary, but different components of any testing activity.

The definition of testing according to the ANSI/IEEE 1059 standard is that testing is the process of analysing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item.

Remember: The purpose of testing is verification, validation and error detection in order to find problems – and the purpose of finding those problems is to get them fixed.

2. Why Testing CANNOT Ensure Quality

Testing in itself cannot ensure the quality of software. All testing can do is give you a certain level of assurance (confidence) in the software. On its own, the only thing that testing proves is that under specific controlled conditions, the software functioned as expected by the test cases executed.

3. What is Software “Quality”?

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.
However, quality is a subjective term. It will depend on who the ‘customer’ is and their overall influence in the scheme of things. A wide-angle view of the ‘customers’ of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organisation’s management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine reviewers, etc. Each type of ‘customer’ will have their own view on ‘quality’ - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.

4. What is “Quality Assurance”?

“Quality Assurance” measures the quality of processes used to create a quality product.

Software Quality Assurance (‘SQA’ or ‘QA’) is the process of monitoring and improving all activities associated with software development, from requirements gathering, design and reviews to coding, testing and implementation.

It involves the entire software development process - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with at the earliest possible stage. Unlike testing, which is mainly a ‘detection’ process, QA is ‘preventative’ in that it aims to ensure quality in the methods & processes – and therefore reduce the prevalence of errors in the software.

Organisations vary considerably in how they assign responsibility for QA and testing. Sometimes they’re the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers or quality managers.

5. Quality Assurance and Software Development

Quality Assurance and development of a product are parallel activities. Complete QA includes reviews of the development methods and standards, and reviews of all the documentation (not just for standardisation but for verification and clarity of the contents also). Overall Quality Assurance processes also include code validation.

A note about quality assurance: The role of quality assurance is a superset of testing. Its mission is to help minimise the risk of project failure. QA people aim to understand the causes of project failure (which includes software errors as an aspect) and help the team prevent, detect, and correct the problems. Often test teams are referred to as QA Teams, perhaps acknowledging that testers should consider broader QA issues as well as testing.

6. What’s the difference between QA and testing?

Simply put:

TESTING means “Quality Control”; and


QUALITY CONTROL measures the quality of a product; while
QUALITY ASSURANCE measures the quality of processes used to create a quality product.

7. The Mission of Testing

In well-run projects, the mission of the test team is not merely to perform testing, but to help minimise the risk of product failure. Testers look for manifest problems in the product, potential problems, and the absence of problems. They explore, assess, track, and report product quality, so that others in the project can make informed decisions about product development. It's important to recognise that testers are not out to "break the code." We are not out to embarrass or complain, just to inform. We are human meters of product quality.
Software Quality Assurance & Usability Testing

The role of User Testing in Software Quality Assurance.

Table of Contents:

1 The Role of User Testing in Software Quality Assurance

1.1. Introduction

1.2. What is 'Usability Testing'

1.3. Why Usability Testing should be included as an element of the testing cycle
2 How to Approach Usability Testing

2.1. How to Implement Usability Testing

2.2. The Benefits of Usability Testing

2.3. The Role and benefits of "Usability Testers"


3 Summary
4 Sources of Reference & Internet Links

1. The role of User Testing in Software Quality Assurance.

1.1. Introduction

My first introduction to Usability Testing came when I was a new tester in the Lending department of a large financial institution. They had developed the first of a set of loan management applications.
A post mortem was then carried out on the software, and I was involved as a representative of the test team. The investigation discovered that the software was not "user-friendly".

The lessons learnt from that exercise were then implemented into any further developments, and saw the addition of "usability testing" to the system test cycle. The software was re-worked.

1.2. What is 'Usability Testing'

'Usability Testing' is defined as: "In System Testing, testing which attempts to find any human-factor problems". [1] A better description is "testing the software from a user's point of view".

1.3. Why Usability Testing should be included as an element of the testing cycle.

I believe that QA have a certain responsibility for usability testing. There are several factors involved, but the main reason is the 'perspective differences' or different viewpoints of the various teams involved.

To demonstrate, assume a new application is developed that is exactly, 100%, in accordance with the design specifications - yet, unfortunately, it is not fit for use, because it may be so difficult/awkward to use.

I remember a diagram that vividly showed this - it showed the design of a swing, with sections on "what the customer ordered", "what the development team built", "what the engineers installed", etc.

This is especially true where the business processes that drive the design of the new application are very complex (for example bespoke financial applications).

Secondly, when a totally new or custom application is being developed, how many of the coders themselves (1

Even if the testers are indeed experts in their area, they may miss the big picture, so I think that usability test

Thirdly, apart from the usual commercial considerations, the success of some new software will depend on how

2. How to approach Usability Testing

2.1. How to Implement Usability Testing

The best way to implement usability testing is two fold - firstly from a design & development perspective, then

From a design viewpoint, usability can be tackled by (1) Including actual Users as early as possible in the de

(2) Following on from the screen reviews, standards should be documented i.e. Screen Layout, Labelling/Nami

Where an existing system or systems are being replaced or redesigned, usability issues can be avoided by usin

3). Including provisions for usability within the design specification will assist later usability testing. Usually for

An example of a usability consideration within the functional specification may be as simple as specifying a min

4). At the unit testing stage, there should be an official review of the system - where most of those issues can

5). All the previous actions could be performed at an early stage if Prototyping is used. This is probably the bes

6). From a testing viewpoint, usability testing should be added to the testing cycle by including a formal "Us
User Acceptance Testing (UAT) is an excellent exercise, because not only will it give you their initial impression

(7) Another option to consider is to include actual users as testers within the test team. One financial organizat

8). The final option that may be to include user testers who are eventually going to be (a) using it themselves;

2.2. The Benefits of Usability Testing

The benefits of having had usability considerations included in the development of computer software are imme

Better quality software.


Software is easier to use.
Software is more readily accepted by users.
Shortens the learning curve for new users.

2.3. The Role and benefits of "Usability Testers"

Apart from discovering and preventing possible usability issues, the addition of 'Usability Testers' to the test te

They can also help to:

Refocus the testers and increase their awareness to usability issues, by providing a fresh viewpoint

Provide and share their expert knowledge - training the testers to the background and purpose of the system

Provide a "realistic" element to the testing, so that test scenarios are not just "possible permutations".

3. Summary:

1. Usability evaluation should be incorporated earlier in the software development cycle to minimize resistance to changes in a hardened user interface;
2. Organizations should have an independent usability evaluation of software products to avoid the temptation to overlook problems to release the product;
3. Multiple categories of dependent measures should be employed in usability testing because subjective measurement is not always consonant with user performance; and
4. Even though usability testing at the later stages of development may not impact software changes, it is useful to point out areas where training is needed to overcome deficiencies in the software.

In my experience, the greater the involvement of key users, the more pleased they will be with the end product. Getting management to commit their key people to this effort can be difficult, but it makes for a better product in the long run.

4. Sources of Reference:

4.1. Publications

"The Case for Independent Software Usability Testing: Lessons Learned from a Successful Intervention". Author: David W. Biers.

Originally published: Proceedings of the Human Factors Society 33rd Annual Meeting, 1989, pp. 1218-1222

Republished in: G. Perlman, G. K. Green, & M. S. Wogalter (Eds.), Human Factors Perspectives on Human-Computer Interaction: Selections from Proceedings of Human Factors and Ergonomics Society Annual Meetings, 1983-1994. Santa Monica, CA: HFES, 1995, pp. 191-195.

http://www.acm.org/~perlman/hfeshci/Abstracts/89:1218-1222.html

NASA Usability Testing Handbook

http://aaa.gsfc.nasa.gov/ViewPage.cfm?selectedPage=48&selectedType=Product

1. A Practical Guide to Usability Testing. Joseph S. Dumas & Janice C. Redish. Norwood, NJ: Ablex Publishing, 1993. ISBN 0-89391-991-8. This step-by-step guide provides checklists and offers insights for every stage of usability testing.

2. Usability Engineering. Jakob Nielsen. Boston, MA: Academic Press, 1993. ISBN 0-12-518405-0. This book immediately sold out when it was first published. It is a practical handbook for people who want to evaluate systems.

3. Usability Inspection Methods. Jakob Nielsen & Robert L. Mack (Eds.). New York: John Wiley & Sons, 1994. ISBN 0-471-01877-5. This book contains chapters contributed by experts on usability inspection methods such as heuristic evaluation, cognitive walkthroughs, and others.

4. Cost-Justifying Usability. Randolph G. Bias & Deborah J. Mayhew (Eds.). Boston: Academic Press, 1994. ISBN 0-12-095810-4. This edited collection contains 14 chapters devoted to demonstrating the importance of usability evaluation to the success of software development.

5. Usability in Practice: How Companies Develop User-Friendly Products. Michael E. Wiklund (Ed.). Boston: Academic Press, 1994. ISBN 0-12-751250-0. This collection of contributed chapters describes the usability practices of 17 companies: American Airlines, Ameritech, Apple, Bellcore, Borland, Compaq, Digital, Dun & Bradstreet, Kodak, GE Information Services, GTE Labs, H-P, Lotus, Microsoft, Silicon Graphics, Thompson Consumer Electronics, and Ziff Desktop Information. It amounts to the broadest usability ...
The lending department of a large financial institution had developed the first of a set of loan management applications (and, almost as an afterthought, decided they'd better test it). The application was very good, and of high quality. Technologically speaking, the software was a big step forward - away from paper forms and huge filing cabinets, to an online system which would manage and track all actions previously written by hand. When version 1.0 was ready, it went into one of the larger regional offices on pilot, the intention being to then gradually release it nationally. However, the pilot implementation was a disaster, and the release was postponed. The intended users wouldn't use the application, and went back to doing things by hand. It quickly became clear that the reason was not that the software didn't work, but that they couldn't work the software. At first it was assumed that this was because it was such a technological leap forward - i.e. the users were unfamiliar with computers as a whole, resistant to change and reluctant to accept new technology. However, this was not the main problem - the problem was with the software itself.

I was involved as a representative of the test team in the investigation that followed. The investigation discovered that the software was not "user-friendly". Yet I, as a tester, had not considered usability or operability to be a problem. We then sat down with several of the users and got them to go through the application with us, screen by screen. This showed that testers have a different viewpoint than users: I was so familiar with the system that I didn't consider some convoluted keystrokes to be a problem, until I saw them from a new user's perspective. It turned out to be a very important lesson for me - and indeed would be very educational for any tester or developer.

The lesson carried over to later developments, and saw the addition of "usability testing" to the system test cycle. The software was re-worked and re-released. The revamped version, although containing mostly cosmetic (non-functional) changes, proved to be a success - although the damage was done: there was a little more reluctance to accept the software because users had "heard that it wasn't much good".
Microsoft Word Files - Table of Contents

Acceptance Form
Acceptance Test Plan
Action Item Log
Change Control Form
Change Control Log
Change Control Log - Detailed
Change Request Form
Data Access Control Form
Enhancement Request Form
Installation Completion Form
Issue Log
QA / Program Manager Checklist
Quality Log
Release Control Form
Requirements Testing Report
Risk Log
Risk Management Plan Form
Software Test Plan Template
System Final Release Sign-off Form
System Requirements Sign-off Form
System Test Cycle Sign-off Form
System Test Environment Sign-off Form
System Test Plan Sign-off Form
System Test Sign-off Form
Test Case Template
Test Case Validation Log
Test Plan Review Checklist
Test Plan Task Preparation
Test Record
Test Script Allocation Form
Test Script
Team Roles and Responsibilities Form
Team Training Requirements Form
Unit Test Plan
User Acceptance Test (UAT) Report
Version Control Log
Web Usability Test Report

Microsoft Excel Files - Table of Contents


Worksheet - Use this template to:

Action Item Log - Allocate an action item number, description, status (Low/Medium/High), date reported, the resource it was assigned to, its due date, and any additional comments.

Change Control Log - Identify the basis for the change and confirm whether it is approved or disapproved. Include the Software Change Request (SCR) #, Requirement (Rqmnt) #, date submitted, and whether it is approved/not approved, on hold, in progress, or cancelled.

Change History Log - Describe the date, author and history.

Data Access Control - For each person or group, identify the individuals who have access to the test cases and their status, e.g. all of DEV has access to the test cases for Web Project 22B.

Log Status - For each log, identify its Log ID, the nature of the risk/issue, and whether it is Open or Closed.

Failed Scripts - Identify the area where the script failed, and provide details of the Set and Date, with a description of the error and its severity, e.g. minor error, major error, etc.

Open Issues - Identify all open issues by number (#); list when they were created and who raised them; provide a brief description with details of the Assigned/Target Date, Category, Status (e.g. Open or Closed), Resolution and Resolution Date.

Quality Log - When performing the checks, identify the Ref #, its Module, the Method of Checking, the name of the Tester, its Planned Date, Date Completed, details of the Result, the Action Items (i.e. tasks) and the Sign-off Date.

Risk Log - Identify the Risk Number, its Date, Type (e.g. Business/Project/Stage), a brief description, Likelihood %, Severity (e.g. Low or Medium), Impact, Action Required, who it was Assigned To, and its Status.

Roles and Responsibilities - Identify all the roles on the project, with details of their responsibilities. Include contact names and email addresses.

Status Report - Identify the function under test and enter its business value on a scale of 1-5, with 1 the lowest and 5 the highest (or whichever numbering system you wish to use), plus details of problem severity broken out by a factor of 1 to 5. The total number of issues (a.k.a. anomalies) is calculated in the final column.

Test Script - Enter the area under test, its Set, whether it has Passed or Failed, with a description of the error and its severity, e.g. L/M/H.

Test Script List - Enter the area under test, its Test Case ID, Bug ID, Bug Fixed Date, Bug Fixed By and Fix Verified By details.

Task Preparation - Use this checklist to prepare for the Test Plan: review the Software Requirements Specifications, identify functions/modules for testing, and perform risk analysis. A second checklist covers Test Plan population and helps to identify/prioritise features to be tested, define the test strategy, and identify test tools and resource requirements.

Test Case - Captures the name of the Test Case, its Description, Start Conditions, Pass Criteria, Tester Name, Build Number, the Test Data used, and the Steps, Actions and Expected Results.

Test Tracking Report - Use this to track the progress of the software tests each week, capturing which tests were Planned, which were Attempted, and how many were Successful.

Validation Log - Use this to capture the project's Completion Date, Test Event, Test Case ID, Test Date, Tester, Test Results and Status.

Version Control Log - Use this to track the product's Version No., its Date, and Approvals.

Web Usability Report - Use this to analyse the usability of a web project, such as the performance of its Navigation, Graphics and Error Messages, and the quality of its Microcontent.

FAQs

FAQ: What file formats are the templates?

All files are in Microsoft Word or Microsoft Excel format, and are virus-free.

FAQ: How soon can I download them?

Immediately after you pay online, you are sent to a page where you can download the templates
online.
FAQ: What is the End User License Agreement?


Unit Test Plan
Module ID: _________

1. Module Overview

Briefly define the purpose of this module. This may require only a single phrase, e.g. calculates overtime pay
amount, calculates equipment depreciation, performs date edit validation, or determines sick pay eligibility.

1.1 Inputs to Module

[Provide a brief description of the inputs to the module under test.]

1.2 Outputs from Module

[Provide a brief description of the outputs from the module under test.]

1.3 Logic Flow Diagram

[Provide logic flow diagram if additional clarity is required.]


2. Test Data

(Provide a listing of test cases to be exercised to verify processing logic.)

2.1 Positive Test Cases

[Representative data samples should provide a spectrum of valid field and processing values including "Syntactic"
permutations that relate to any data or record format issues. Each test case should be numbered, indicate the
nature of the test to be performed and the expected proper outcome.]

2.2 Negative Test Cases

[The invalid data selection contains all of the negative test conditions associated with the module. These include
numeric values outside thresholds, invalid characters, invalid or missing header/trailer records, and invalid data
structures (missing required elements, unknown elements, etc.).]
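
[For illustration only - not part of the template - the sketch below shows how positive and negative test cases for one of the example modules above, a hypothetical overtime pay calculation, might be expressed as executable unit tests. The function name, weekly threshold and pay multiplier are assumptions made up for the sketch.]

import unittest

def overtime_pay(hours_worked, hourly_rate, weekly_threshold=40, multiplier=1.5):
    # Hypothetical module under test: pay due for hours above the weekly threshold.
    if hours_worked < 0 or hourly_rate < 0:
        raise ValueError("hours and rate must be non-negative")
    extra_hours = max(0, hours_worked - weekly_threshold)
    return round(extra_hours * hourly_rate * multiplier, 2)

class OvertimePayUnitTests(unittest.TestCase):
    # 2.1 Positive test cases: valid field and processing values.
    def test_no_overtime_at_threshold(self):
        self.assertEqual(overtime_pay(40, 10.0), 0.0)

    def test_overtime_above_threshold(self):
        self.assertEqual(overtime_pay(45, 10.0), 75.0)  # 5 extra hours x 10.0 x 1.5

    # 2.2 Negative test cases: values outside thresholds, invalid data.
    def test_negative_hours_rejected(self):
        with self.assertRaises(ValueError):
            overtime_pay(-1, 10.0)

    def test_non_numeric_input_rejected(self):
        with self.assertRaises(TypeError):
            overtime_pay("forty", 10.0)

if __name__ == "__main__":
    unittest.main()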

3. Interface Modules

[Identify the modules that interface with this module indicating the nature of the interface: outputs data to, receives
input data from, internal program interface, external program interface, etc. Identify sequencing required for
subsequent string tests or sub-component integration tests.]

4. Test Tools

[Identify any tools employed to conduct unit testing. Specify any stubs or utility programs developed or used to
invoke tests. Identify names and locations of these aids for future regression testing. If data supplied from unit test
of coupled module, specify module relationship.]
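
[As a hedged illustration of the "stubs or utility programs" mentioned above, the sketch below replaces an external dependency with a stub so the unit can be invoked in isolation. The pay-report function and the timesheet lookup it depends on are hypothetical, not taken from this template.]

from unittest import mock

def weekly_pay_report(employee_id, hourly_rate, fetch_hours):
    # Unit under test: depends on an external timesheet lookup (fetch_hours).
    hours = fetch_hours(employee_id)
    return {"employee": employee_id, "gross": hours * hourly_rate}

def test_weekly_pay_report_with_stub():
    # The stub stands in for the real timesheet service during the unit test.
    fetch_hours_stub = mock.Mock(return_value=38)
    report = weekly_pay_report("E042", 12.0, fetch_hours=fetch_hours_stub)
    assert report["gross"] == 456.0
    fetch_hours_stub.assert_called_once_with("E042")

if __name__ == "__main__":
    test_weekly_pay_report_with_stub()
    print("stub-based unit test passed")
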
5. Archive Plan

[Specify how and where data is archived for use in subsequent unit tests. Define any procedures required to obtain
access to data or tools used in the testing effort. The unit test plans are normally archived with the corresponding
module specifications.]

6. Updates
[Define how updates to the plan will be identified. Updates may be required due to enhancements, requirements
changes, etc. The same unit test plan should be re-used with revised or appended test cases identified in the update
section.]
Some suggested starting points for a reader-friendliness checklist include:

Clarity of Communication

Does the site convey a clear sense of its intended audience?

Does it use language in a way that is familiar to and comfortable for its readers?

Is it conversational in its tone?

Accessibility

Is load time appropriate to content, even on a slow dial-in connection?

Is it accessible to readers with physical impairments?

Is there an easily discoverable means of communicating with the author or administrator?
Consistency

Does the site have a consistent, clearly recognizable "look-&-feel"?

Does it make effective use of repeating visual themes to unify the site?

Is it visually consistent even without graphics?

Navigation

Does the site use (approximately) standard link colors?

Are the links obvious in their intent and destination?

Is there a convenient, obvious way to maneuver among related pages, and between
different sections?
Design & maintenance

Does the site make effective use of hyperlinks to tie related items together?

Are there dead links? Broken CGI scripts? Functionless forms?

Is page length appropriate to site content?

Visual Presentation

Is the site moderate in its use of color?

Does it avoid juxtaposing text and animations?

Does it provide feedback whenever possible?

(for example, through the use of an easily recognizable ALINK color, or a "reply"
screen for forms-based pages)

When testing a web-based program, I look at testing the following:

How does the Web site look? (layout, overall presentation)
Check its content for spelling mistakes, etc.
How is the flow or logic organised on the pages?
Link testing (a simple automated link and response-time check is sketched below)
Navigation testing
Memory demands - no needless big files, etc.
How fast is the system?
Are there any processes running?
Any forms, reports, queries? Then test them accordingly.
Check HTML aspects.
Time testing (meaning how fast it is)
Load testing, and other things in usability and system testing.
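
A minimal sketch of the kind of automated check behind the link testing and time testing items above, assuming only the Python standard library; the URL is a placeholder for the site under test.

import time
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    # Collects href targets from anchor tags on the fetched page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_page(url, timeout=10):
    # Time how long the page takes to load, then probe each link for dead ends.
    start = time.time()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        body = response.read().decode("utf-8", errors="replace")
    print(f"{url}: loaded in {time.time() - start:.2f}s")

    collector = LinkCollector()
    collector.feed(body)
    for link in collector.links:
        target = urljoin(url, link)
        try:
            with urllib.request.urlopen(target, timeout=timeout) as reply:
                status = reply.status
        except Exception as exc:
            status = f"FAILED ({exc})"
        print(f"  {target}: {status}")

if __name__ == "__main__":
    check_page("https://example.com/")  # placeholder URL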
