
Session W3, 5/18/2005 11:30 AM
Bio / Presentation / Paper

A SIMPLE (AND REVOLUTIONARY) TEST AUTOMATION DASHBOARD
Kelly Whitmill
IBM Corporation

International Conference On Software Testing Analysis & Review
May 16-20, 2005
Orlando, FL USA
Kelly Whitmill
Kelly Whitmill has over 20 years of experience in software testing. For most of that time he has served as a team lead responsible for finding and implementing effective methods and tools to accomplish the required tests. He is particularly interested in practical approaches that can be effective in environments with limited resources, and he has a strong interest in test automation. He has worked on PC-based, Unix-based, and mainframe-based projects. He currently works for the IBM Printing Systems Division in Boulder, Colorado.
Test Automation Dashboard

Kelly Whitmill
IBM Printing Systems
whitmill@us.ibm.com
Automation Dashboard

[Slide: a filled-in sample dashboard titled "IPS GUI Automation Impact," a one-page layout with four quadrants:
• Purpose: Mission and Quality Characteristics bar charts (% importance per item).
• Impact: % of Test Effort gauge (30); Test Phase table (Generation, Execution, and Verification: Yes; Planning, Management, and Setup: No); Verification Type table (Heuristic: Yes; others: No); Satisfaction Level, Test Impact, and Verification Level bars (scale 1-100).
• ROI: Opportunity Costs, Re-use (projects), Re-use (releases), Development Cost, Maintenance Cost, Information Value, Efficiency, and Effectiveness bars (scale 1-100).
• Maintainability: Quality Processes table (all No); Support Level table (Documented, As-available on team, and Bug fixes only: Yes; others: No); Architecture table (Modular Tasks, Modular Function, and Model Based: Yes; others: No); Chance of Failure chart (ErrorFactor and UseFrequency, scale 0-10; Low: <=2, Med: 3-7, High: >7).]

Automation Dashboard
• Principles
  • Good models lead to good testing
  • Visibility encourages participation and performance
  • Measurement drives improvement
  • Simplicity makes it achievable

[Slide: the same "IPS GUI Automation Impact" sample dashboard shown again, dated 1/13/2005.]


Dashboard - Purpose
• Mission
  • Test management
  • Find bugs
  • Certify
  • Coverage
  • Regression
  • Setup
  • Process
[Chart: Mission, % importance for each mission, scale 0-100.]

Dashboard - Purpose
• Quality Characteristics
  • Functionality
  • Installability
  • Performance
  • Reliability
  • Usability
  • Process
[Chart: Quality Characteristics, % importance for each characteristic, scale 0-100.]

Dashboard - Impact
• Percent of Test Effort
[Gauge: % of Test Effort, value 30.]

Dashboard - Impact
• Test phase
  • Planning
  • Management
  • Generation
  • Setup
  • Execution
  • Verification

Test Phase
Test Planning      No
Test Management    No
Test Generation    Yes
Test Setup         No
Test Execution     Yes
Test Verification  Yes

Dashboard - Impact
• Satisfaction level
• Test impact
• Verification completeness
[Chart: Satisfaction Level, Test Impact, and Verification Level bars, scale Low=1 to High=100.]

Dashboard - Impact
• Verification Type
  • None
  • Master based
  • Hard coded
  • Data driven
  • Heuristic

Verification Type
None          No
Master-based  No
Hard-coded    No
Data-driven   No
Heuristic     Yes

Dashboard - Impact
• Percent of Test Effort
• Test phase
• Satisfaction level / Test impact / Verification level
• Verification Type

Dashboard - ROI
• Return on Investment
  • Development Cost
  • Maintenance Cost
  • Opportunity Cost

ROI
• Re-use across projects
• Re-use across releases
• Information value
• Efficiency
• Effectiveness
[Chart: Opportunity Costs, Re-use (projects), Re-use (releases), Development Cost, Maintenance Cost, Information Value, Efficiency, and Effectiveness bars, scale Low=1 to High=100.]

Dashboard - Maintainability
• Quality processes
  • Requirements review
  • Design review
  • Code review
  • Unit test
  • Function test
  • System test

Quality Processes
Requirements Review  No
Design Review        No
Code Review          No
Unit Test            No
Function Test        No
System Test          No

Dashboard - Maintainability
• Architecture
  • Test case library
  • Modular tasks
  • Modular function
  • Data driven
  • Keyword driven
  • Model based

Architecture
Modular (Tasks)     Yes
Modular (Function)  Yes
Data-Driven         No
Keyword Driven      No
Model Based         Yes
Test Case Library   No

Dashboard - Maintainability
• Support level
  • None / As-is
  • Documented
  • As-available (not on team)
  • As-available (on team)
  • Bug fixes only
  • Full support

Support Level
None/As-is                  No
Documented                  Yes
As-available (on team)      Yes
As-available (not on team)  No
Bug fixes only              Yes
Full Support                No

Dashboard - Maintainability
• Risk of having problems with automation
  • Error Factors
  • Frequency of Use
[Chart: Chance of Failure, ErrorFactor and UseFrequency bars, scale 0-10; Low: <=2, Med: 3-7, High: >7.]

Dashboard – Maintainability (Error Factors)
3 New
3 Complex
3 Late add
3 Under extreme time pressure
1 Many previous defects
1 Changed or rewritten
1 Frequently adjusted
1 Used new tools or techniques for the first time
1 Transferred to another developer
1 Optimized more frequently than normal
1 Many interfaces
1 Inexperienced developers
1 Insufficient involvement of users
1 Insufficient quality processes during development
1 Sub-optimal communication
1 Insufficient testing
1 Large development team
Dashboard - Maintainability
• Quality processes
• Architecture
• Support Level
• Risk of automation failure
  • Chance of Error x Frequency of Use

Make the Dashboard Visible
• Include it in the test plan
• Include it in the final report
• What other means do you have to provide visibility?
  • Retrospectives
  • Periodic Status Meetings
  • ...

Dashboard - Demo
• If you want me to email the Excel spreadsheet file with the dashboard template:
  • Leave me a card/note with your email address, or
  • Send me an email: whitmill@us.ibm.com

Test Automation Dashboard

By Kelly Whitmill
IBM Printing Systems
Boulder, Colorado
whitmill@us.ibm.com
Automation Dashboard

Even though almost everyone recognizes that automation is a key element in improving
test efficiency and effectiveness, many automation efforts unfortunately fall far short of
achieving the desired results. One tool for keeping progress visible is an Automation
Dashboard—a one-page report that tells the automation story clearly and simply with
charts and gauges. This report becomes a tool to improve your organization’s
understanding, communication, and use of good automation practices. At the same time it
helps keep the focus on costs, benefits, purposes, and related automation issues that are
often overlooked. The dashboard provides a quick measurement of the automation and
allows results to be compared to expectations and other test efforts. Measurement and
visibility alone promote improvement by increasing awareness of your automation goals.
Additionally, the simplicity of having a single, relatively easy-to-fill-in page makes it
achievable and easier to share with others.

The success of the automation dashboard is based on four principles.


1. Good models lead to good testing.
Many test experts rely on models (a set of patterns or guidelines) to quickly guide
them in choosing what to test and how to test. To improve automation we must
base more automation decisions on fundamentally sound models. The automation
dashboard brings these automation fundamentals to the forefront and does so
without time-consuming education and without tedious discussions.
2. Visibility encourages participation and performance.
There is never enough time to accomplish everything in test. Everyone must pick
their battles. People will naturally choose the ones that get noticed. The cliché
“out of sight, out of mind” holds true for automation. The dashboard gives
automation the visibility it needs to be noticed, chosen, and focused on.
3. Measurement drives improvement.
It has been shown many times over that what you measure is what improves. This
dashboard measures the use of proper fundamentals in your test automation, and it lets you use human nature to help drive the much-needed improvements in automation.
4. Simplicity makes it achievable.
With a one-page, easy-to-generate dashboard you can (a) subtly but effectively educate all stakeholders on the fundamentals of automation, (b) clarify the purpose, status, and effectiveness of automation, and (c) begin to understand the role and impact of automation.

Dashboard Contents
The automation dashboard is designed to be a one-page set of charts and gauges that
report on a single automation effort, usually a tool. (See appendix A to view a sample
dashboard.) You would need a separate dashboard for each tool. However, you could
construct a dashboard to provide a more general picture of many tools. A dashboard
addresses four main areas.
• Purpose
What is the overall purpose that the automation is supposed to accomplish? Do
you really understand the mission of the automation?
• Impact
How much effect does this really have on the test effort? What phases of testing
are impacted? How much does it really help the test effort?
• Return on Investment (ROI)
Is this automation worth the investment? How much is it costing? How valuable
is the information that it provides? Does it improve test efficiency? Does it
improve test effectiveness?
• Maintainability
How hard is it going to be to maintain this automation? Were good development
processes used to develop it? Is it built on a fundamentally sound architecture?
Is there a support structure for it?

A description of each area of the dashboard follows.

Purpose
Though it seems a little hard to comprehend, automation is often developed without a
good understanding of the mission or purpose of the automation. For example, is the
purpose to save time or find more bugs? Is it to comply with management mandates,
show conformance to industry standards, or satisfy a process? All automation
stakeholders should have a clear understanding of the mission priorities for the
automation.

Establishing the automation mission priorities during the initial planning phase and
making them visible throughout the planning, design, implementation and deployment of
the automation avoids a myriad of problems and helps ensure that decisions, perceptions,
and related work are properly focused.

For the automation dashboard, two views of the automation purpose are suggested.
1. Mission
List each mission and its percentage importance. The total of all percentages
should add up to 100. A list of possible missions could include (you may use a
different list):
• Find bugs
• Certify code meets a standard
• Provide code coverage
• Regression Test
• Make setup easier and faster.
• Test Management (e.g. defect management, test case tracking, etc.)
• Process (e.g. generate charts for weekly reports)
2. Quality Characteristics
This is most important for automation that is intended to test code. It states up
front what the automation is intended to accomplish. Specify the quality
characteristics for the automation and the percentage importance for each
characteristic. The sum of all characteristics’ importance should be 100.

• Functionality
• Installability
• Performance
• Reliability
• Usability
• Security
• Localization/Globalization
• Test Management (accommodates automation that is not for testing code)
• Process (accommodates automation that is not for testing code)
To illustrate the usefulness of this indicator on the automation dashboard, assume that there is a tool to automate the testing of a print subsystem in an operating system. If the most important quality characteristic were "functionality," one could reasonably expect the tool to focus on testing inputs, outputs, and functions of the print subsystem. However, if the most important quality characteristic were "reliability," one could reasonably expect the tool to focus on memory leaks, corruption, deadlocks, and so forth. The reliability-focused tool may never explicitly test an input, output, or specific function.

The purpose section of the dashboard quickly conveys the mission and priorities of the tool to all stakeholders and is instrumental in focusing all related decisions and work accordingly.
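To make the bookkeeping concrete, the Purpose section can be captured as two weighted lists whose importance percentages each total 100. The following minimal Python sketch (the mission names and percentages are hypothetical, not taken from any real dashboard) shows one way to record and check them:

# Minimal sketch: the Purpose section as weighted lists that must each sum to 100%.
# The specific missions and percentages below are made-up examples.
MISSION = {"Find bugs": 50, "Regression": 30, "Coverage": 10, "Setup": 10}
QUALITY_CHARACTERISTICS = {"Functionality": 60, "Reliability": 25, "Usability": 15}

def check_importance(name, weights):
    """Verify the percentage-importance values add up to 100, as the dashboard requires."""
    total = sum(weights.values())
    if total != 100:
        raise ValueError(f"{name} importance sums to {total}, expected 100")
    top = max(weights, key=weights.get)
    print(f"{name}: top priority is {top!r} at {weights[top]}%")

if __name__ == "__main__":
    check_importance("Mission", MISSION)
    check_importance("Quality Characteristics", QUALITY_CHARACTERISTICS)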

Impact
What impact does this automation have on the project? How much effort will it require of
the test team? How labor intensive or how turnkey is the automation?
The dashboard uses four indicators, or gauges, to present the impact of this automation.
1. Percentage of test effort
What percentage of the total test effort is consumed by this automation? If 30%
of the test effort is taken by this automation then you would expect it to have at
least a commensurate impact on the effectiveness and/or efficiency of the test
effort. The higher the test effort percentage, the more a stakeholder must pay attention to and understand the automation, and the stiffer the requirement to justify it. This is not intended to be an exact number, but a general figure that conveys the level of effort required for this automation.
2. Satisfaction/Verification/Impact Chart
The testers’ satisfaction level with the tool is a good indicator of its impact. The lower the satisfaction level, the more likely testers are to avoid or ignore the tool, and the less potential it has for a positive impact on the project.

Verification completeness refers to the comprehensiveness of the verification. The lower the verification completeness, the more tester resources may be required to be effective. The higher the verification completeness, the more likely it is that the tool will be used to its full potential. All stakeholders and decision makers need to understand the verification level: running 10,000 test cases on a project with 90% verification and finding no problems is quite a different situation than running 10,000 test cases with 2% verification.

The test impact is a measure of how important the test results are to the test effort.
How dependent are you on the information provided by automation? How much
feedback does the automation provide for test, development and management?
The overall assessment of the test impact should be at least commensurate with
the test effort required. An honest evaluation of this should expose situations
where the test results do not justify the effort. For example, consider a regression
test that grows and grows over time and becomes labor intensive but doesn’t find
bugs. Perhaps no one is quite sure what it covers, thus rendering it not very useful
in risk-based decision making. Such a test should show up with a test effort
percentage that far exceeds the test impact percentage.
3. Test Phase
What phases of testing are impacted by this automation? The word automation is used in many ways, and different people attach different meanings to it. This indicator clarifies what part of the testing is being automated and which phases of testing are being
impacted. If you look at the dashboards for all your automation you should get a
good picture of where your automation is weak and where it is strong.
The sample dashboard uses the following phases/activities:
• Test planning
• Test Management
• Test Generation
• Test Setup
• Test Execution
• Test Verification
Additionally, you can identify some degree of completeness of the automation.
For example, for the test verification phase you could choose the value “partial”
if some of the verification is automated and some is manual.
4. Verification Type
The type of verification being done by the automation has an impact on the project. If there is no automated verification, the responsibility falls on the tester to complete that activity instead of the tool. Master-based verification, and possibly hard-coded verification, will require a lot of maintenance when the product changes. Data-driven verification may make the automation more flexible. Heuristic verification usually implies more flexible and context-sensitive verification but does not guarantee the results; it may imply that a sampling of more comprehensive verification is still needed.

Verification type combined with the level of verification gives decision makers a better understanding of how to interpret the results of tests.
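To illustrate the difference in practice, here is a minimal Python sketch (my own illustration; the report text, master content, and checks are hypothetical) contrasting master-based verification with a heuristic check:

# Minimal sketch contrasting master-based and heuristic verification.
# The report text, master content, and checks are hypothetical examples.
EXPECTED_MASTER = "Total pages: 3\nJob status: COMPLETE\n"  # captured and reviewed earlier

def verify_master_based(actual, master=EXPECTED_MASTER):
    """Compare the full output to a saved master; any change at all fails the test."""
    return actual == master

def verify_heuristic(actual):
    """Sample a few strong indicators of success instead of checking everything."""
    return "Job status: COMPLETE" in actual and "ERROR" not in actual

if __name__ == "__main__":
    output = "Total pages: 3\nJob status: COMPLETE\n"
    print("master-based:", verify_master_based(output))  # breaks on any formatting change
    print("heuristic:   ", verify_heuristic(output))     # tolerates cosmetic changes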

Return on Investment (ROI)


Hopefully, we do automation because we think the benefit we derive from it is worth
more than what it costs us to do it. There have been many presentations and papers which
try to quantify automation ROI. It is the perspective of this author that those papers and
presentations offer valuable insights into automation ROI but fall short of providing a
believable, reliable, quantitative ROI. Nonetheless, it is still important to keep the ROI
factors visible and to understand the ROI even if it is not easily quantifiable. ROI may be
similar to what was once said about obscenity. I may not be able to precisely define it,
but I recognize it when I see it.

The following factors should be considered when trying to understand ROI:


• Opportunity Costs:
What opportunities did you miss out on because you invested your time and
resources in this automation? What could you have accomplished if you invested
your time and resources elsewhere?
• How much is this automation re-used across projects? What is the likelihood for
re-use across projects in the future?
• How much is this automation re-used across releases in this project? What is the
likelihood for this sort of re-use in the future?
• How much does it cost to develop this automation?
• How much does it cost to maintain this automation?
• What is the value of the information I obtain from this automation?
What information does this automation provide? What will it be used for? How
important is it to the business?
• How much does this automation improve my efficiency?
How much faster will I be able to accomplish my tasks and objectives?
• How much does this automation improve my effectiveness?
If the automation is to test code, will it find more bugs than alternative methods?
Will it provide better coverage? Will it do a better job of providing useful
information used for decision making than the alternative methods?

For the automation dashboard, consider each of these factors and rate them on a scale of
1 to 100 (low=1, high=100). Make the factors visible and be able to defend your rating.
The thought process of defending your rating forces you to think about essential elements
of automation that are often ignored. Making them visible via the dashboard lets all
interested parties get a quick view of the overall costs and benefits of the automation.

Maintainability
It is important to understand the maintainability of the automation. There are indicators
that provide valuable insight into maintainability. Those indicators include:
1. What foundation or architecture are the tests built on?
• Test Case Library: A library of individual test cases. Typically, this is a
maintenance intensive architecture. Any change to the externals of the
code being tested has a tendency to require updates to many test cases. If
there is not a good mechanism to identify the contents of each test case, it
can become an almost impossible task to keep the test cases up-to-date.

• Modular Test Tasks: Test cases are logically grouped according to tasks
and can be used as building blocks to build on each other. For example, to
test a calculator you could have a script to test that 1+5=6, and another for
999999 + 1 = 1,000,000. You could then create a test case comprised of
all the addition scripts. You could group the addition, subtraction,
multiplication, and division test cases into a test suite, and so forth. This
architecture makes it easier to locate test cases and understand what is being tested, but it can be very problematic and maintenance intensive in accommodating changes to the product under test. You may still have to
update many test cases to handle a change to the externals of the product.
• Modular Function: The handling of each automation function is
encapsulated into modules. For example, you may develop one module
that provides the operands, another that provides the operator and another
that verifies the result. Each test case uses these modules to implement a
test. If there is a change in how operands are input then only that module
has to change. The test cases are isolated from such changes and are less
maintenance intensive.
• Data Driven: The tests separate the “what” to test from the “how” to test.
For example, the automation code provides the logic for inputting
operands and operators and doing the verification, but reads the actual data
from an external source such as input parameters, files, tables, etc. New test cases then mostly require updating data files rather than the automation code (see the short sketch following this list).
• Keyword Driven: Test cases are written in high level keywords. The
automation interprets the keywords, converts them to actual tests, and
executes them. This is intended to make it easier for testers to write test
cases without getting into the details of coding. This is useful in
environments where the testers have a lot of domain expertise but are not
strong in software development expertise.
• Model Based: This is context-sensitive automation using a simple
behavioral model of the application. The automation chooses the next
action and supporting data based on the current state. Updating the model’s behavior automatically updates the test cases, which is intended to be less maintenance intensive as long as the model does not get too complex.
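To make the modular-function and data-driven styles concrete, the following minimal Python sketch tests a toy calculator: the input and verification logic is encapsulated in small functions, and the "what to test" is read from an external data table. The calculator stand-in and the CSV layout are illustrative assumptions, not part of any particular framework.

# Minimal sketch: modular-function + data-driven testing of a toy calculator.
# The calculator stand-in and the CSV layout are assumptions for illustration only.
import csv
import operator
from io import StringIO

# Modular function: each automation concern lives in one small function, so a change
# in how operands or operators are supplied touches only that function.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def run_calculator(a, op, b):
    """Stand-in for driving the real application under test."""
    return OPS[op](float(a), float(b))

def verify(actual, expected, tolerance=1e-9):
    """Single place that owns result verification."""
    return abs(actual - float(expected)) <= tolerance

# Data-driven: the test cases live in external data (inline here for brevity; in
# practice a file or table maintained separately from the automation code).
TEST_DATA = StringIO("a,op,b,expected\n1,+,5,6\n999999,+,1,1000000\n10,/,4,2.5\n")

if __name__ == "__main__":
    for row in csv.DictReader(TEST_DATA):
        actual = run_calculator(row["a"], row["op"], row["b"])
        status = "PASS" if verify(actual, row["expected"]) else "FAIL"
        print(f'{row["a"]} {row["op"]} {row["b"]} = {actual} -> {status}')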

2. Were adequate quality processes used to develop the automation?


• Automation is no different from any other development effort. Quality in
tends to lead to quality out. Garbage in leads to garbage out. You can get a
lot of insight about how hard this code will be to maintain by knowing
which quality processes were used to develop the automation. Determine
which of the following quality processes were meaningfully applied
during development:
1. Requirements Review
2. Design Review
3. Code Review
4. Unit Test

5. Function Test
6. System Test

3. What level of support is available?


It doesn’t matter how well the code was built if no one is available to update the automation as needed. Is the support level adequate? What level of support is available during your entire test cycle?
• None/As-is: The tool is available on an as-is basis only.
• Documented: What documentation is available? None? User?
Programmer?
• As-Available (not on team): A capable person, who is not part of your test
team, is willing to support the tool as time permits. This probably means
that you are low priority and any support may not be timely.
• As-Available (on team): A capable person on the team is willing to
support the tool as time permits. It is not part of their job responsibilities
but the fact that they are on the team indicates that they have a vested
interest in the tool. If team priorities dictated the need, this person’s job
priorities could be re-arranged to accommodate significant problems. In
general, you cannot count on support being timely.
• Bug-Fixes Only: There is a capable person whose job responsibility
includes fixing bugs with the tool. You probably won’t be able to get any
enhancements to accommodate changes to the application under test.
• Full Support: There is a capable person or team that can provide bug fixes and, if justified, implement new requirements for the tool.

4. What is the risk of encountering errors with this tool?


It is useful to gain some understanding of how likely you are to have a failure
with the automation.
For our purposes, we are defining risk as the chance that a failure will occur.
The chance of failure is composed of two elements: the chance of error and the frequency of use.
Chance of failure = chance of error x frequency of use
The frequency of use is normally fairly easy to determine. The chance of error is
an area where we can lend objectivity to what is otherwise a very subjective
practice. There are known factors that tend to influence the chance of errors in
code. By understanding how many of those factors come into play for the code in question, we can more objectively determine the chance of error. Not every factor influences the chance of error equally, so each factor is assigned a value: the higher the value, the more likely it is to cause an error. The following table
shows a list of factors that cause errors. Your experience may show that there are
other factors that need to be added to the table for your project. The values
assigned are not precise. They just show that some factors are much more likely
than others to cause problems. You may want to adjust the values assigned based
on the history of your project and development teams/environments.

Chance of Error
Value  Factor
3      New or changed complex function
3      Completely new function
3      Late-add function
3      Functions that were realized under extreme time pressure
3      Functions in which many defects were found earlier (e.g., in previous releases or during earlier reviews)
1      Changed or rewritten function
1      (Especially frequently) adjusted functions
1      Functions for which certain tools or techniques were employed for the first time
1      Functions which were transferred from one developer to another
1      Functions which had to be optimized more frequently than average
1      Functions not tested within the last <n> releases
1      Functions with many interfaces
1      Inexperienced developers
1      Insufficient involvement of users
1      Insufficient quality assurance during development
1      Insufficient quality of low-level tests
1      New development tools and development environment
1      Large development teams
1      Development team with sub-optimal communication (e.g., owing to geographical spread or personal causes)

Determine the chance of error by summing the values of the factors that apply to the specified component.

Periodically revisit the error factors and update them accordingly. For example, if
the original development was under extreme time pressure but you have used this
tool for a couple of releases since then, those items (new function and extreme
time pressure) are probably no longer a factor and can be removed.
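As a worked example of the calculation (not part of the spreadsheet itself, and with made-up factor selections), the following Python sketch sums the applicable factor weights from the table above and combines the result with a use-frequency rating:

# Minimal sketch: chance of failure = chance of error x frequency of use.
# Weights follow the table above; which factors are flagged is a made-up example.
ERROR_FACTOR_WEIGHTS = {
    "completely new function": 3,
    "realized under extreme time pressure": 3,
    "many defects found earlier": 3,
    "functions with many interfaces": 1,
    "inexperienced developers": 1,
    "large development teams": 1,
    # ...the remaining factors from the table would be listed here
}

def chance_of_error(applicable):
    """Sum the weights of the factors that apply to this automation component."""
    return sum(ERROR_FACTOR_WEIGHTS[name] for name in applicable)

def chance_of_failure(applicable, use_frequency):
    """Combine the error likelihood with how often the automation is used (0-10 scale)."""
    return chance_of_error(applicable) * use_frequency

if __name__ == "__main__":
    flagged = ["completely new function", "functions with many interfaces"]
    error = chance_of_error(flagged)  # 3 + 1 = 4, i.e., the Med (3-7) band on the chart
    print("chance of error:", error)
    print("chance of failure:", chance_of_failure(flagged, use_frequency=8))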

Make the Dashboard Visible
All automation stakeholders must see the dashboard to be able to benefit from it. You
need to determine appropriate times and means to present the dashboard. Some
considerations include:
• Include as a part of the function test plan.
• Include in final test report
• Include in retrospectives
• Include in periodic status meetings

Implementation Details
To make your own automation dashboard do the following:
1. Open ADashboard.xls with Microsoft Excel
2. Copy the Dashboard Template Sheet to another sheet.
I accomplish this by doing the following:
1. Right click on the “Dashboard Template” tab
2. Click “Move or Copy”
3. Click the “Create a copy” box
4. In the move or copy dialog box click the location where you want the
sheet inserted
5. Click OK
3. Rename the sheet to something meaningful
• Double-click the new tab
• Type the new sheet name
4. Edit the following gauges in-place on the sheet using the list box.
• Impact – Verification Type
• Impact – Test Phase
• Maintainability – Quality Processes
• Maintainability – Architecture
• Maintainability – Support Level
Each input field has a list box. If you click on the input field (which in most cases
is a field that says either “Yes” or “No”) a list box will appear and you can select
the value you want.
5. Edit the “% of test Effort” gauge. Just edit the percent value and enter the value
that you want.
6. Enter the chart information by filling in fields that are found in rows below the
dashboard. Look in rows 50-110. For each chart there is a cell in all CAPS that
indicates which chart the data is for. You can edit the values for each data item as
appropriate for your automation.
• Note: Do not directly edit the ErrorFactor field for the RISK OF
FAILURE chart. This cell is a formula that adds up all the error factors
provided in the ERROR FACTOR LIST.
• Notes on the ERROR FACTOR LIST.
1. There are three columns:
   i. The name of the error factor.
   ii. An on/off field indicating whether the factor applies (enter 1 for on, blank for off). This is the field adjacent to the error-factor name and is the only field in the error factor list that you should edit.
   iii. The weight field, which indicates that some factors are more important than others.

7. To print the dashboard, first select the dashboard area. Then, in the print dialog, choose to print only the selected portion of the spreadsheet; for example, in the “Print What” section of the printer dialog box, select “Selection”. Printing the entire spreadsheet will also work, but it will print the data rows below the dashboard as well.
8. The spreadsheet is provided as an example on an as-is basis. The above
instructions are just to help those who may not be familiar with Excel
spreadsheets. Feel free to customize the spreadsheet or how you work with the
spreadsheet to suit your needs.

Conclusion
Test automation fundamentals need to become a normal part of testing. Just
having the automation experts be aware of them is not sufficient. The Test Automation
Dashboard makes automation essentials visible to all stakeholders of the automation. The
Dashboard simplifies the process of communicating what is important about automation
and communicating the status of the automation. The sample dashboard provides focus
on four areas.
1. Purpose
This section helps to ensure that it is clearly understood what the automation is
intended to accomplish and to focus all related decisions towards that purpose.
2. Impact
Increase the understanding of how important this automation is to the overall test
effort: the impact in terms of time and resources, the test phases affected, tester satisfaction, and how much information we can expect it to provide.
3. Return on Investment
Increase the understanding of the costs and benefits of the automation.
4. Maintainability
This makes visible an objective view of the maintainability of the automation. Is
it well designed and developed and built on an adequate architecture? Is there an
adequate support structure in place? What is the likelihood of encountering
failures?

Appendix A: Sample Dashboard

[Sample dashboard sheet titled "Template," dated 2/7/2005 (Planning Stage), laid out in four quadrants:
• Purpose: Mission bar chart (% importance for Find Bugs, Regression, Coverage, Management, Certify, Setup, Process) and Quality Characteristics bar chart (% importance for Functionality, Installability, Performance, Reliability, Usability, Process).
• Impact: % of Test Effort gauge (5); Test Phase table (Test Execution: All, Test Verification: All, all other phases: No); Verification Type table (Hard-coded: Yes, Heuristic: Yes, others: No); Satisfaction Level, Test Impact, and Verification Completeness bars (scale 1-100).
• ROI: Opportunity Costs, Re-use (projects), Re-use (releases), Development Cost, Maintenance Cost, Information Value, Efficiency, and Effectiveness bars (scale 1-100).
• Maintainability: Quality Processes table (Requirements Review: Yes, Design Review: Yes, Unit Test: Yes, others: No); Support Level table (Documented: Yes, As-available on team: Yes, others: No); Architecture table (Modular Tasks: Yes, Modular Function: Yes, others: No); Chance of Failure chart (ErrorFactor and UseFrequency, scale 0-10; Low: <=2, Med: 3-7, High: >7).]

Glossary
(A definition of terms as they apply in the context of the automation dashboard)

As-available on team (Support): Someone on the team is capable and willing to provide
support but it is not part of their job responsibilities.
As-available not on team (Support): Someone who is not on the team is capable and
willing to provide support but it is not part of their job responsibilities.
Bug fixes only (Support): It is someone’s job responsibility to fix bugs with the
automation.
Certify: To formally verify the code meets a specified set of criteria.
Code Review: To examine the code for errors.
Coverage: Testing designed to exercise the various parts of the code. For example, statement coverage may try to execute every statement, branch coverage would try to test every branch, and so forth.
Data-driven: Separate the data for the test case from the code/logic to process the data.
Design Review: To examine the design for errors.
Development Cost: The total expenses for the planning, design, and development of the
automation.
Documented (Support): Indicates what documentation is available to help understand
how to use and/or maintain the automation.
Effectiveness: How well does it accomplish its goal? For example, effectiveness may
refer to how good the test is at finding bugs as opposed to how fast it runs.
Efficiency: Time and resources required to accomplish the task/activity. Improved
efficiency should reduce the time and resources required.
Error Factor: A measure of the condition(s) that tend to make the automation error
prone.
Full Support (Support): It is someone’s job responsibility to both fix bugs and add
required updates to the automation.
Function Test: Testing to see that the code does what the functional specification (or
other agreed upon source) says the code will do.
Hard-coded Verification: The data or behavior is written directly into the test case
and/or program.
Heuristic Verification: Rather than doing comprehensive verification, just examine
elements that are good indicators of success or failure. Heuristic verification is often
context sensitive. For example, a query may examine the results for key values rather
than verify every value returned was correct.
Information Value: A measure of the importance of the information provided by the automation. What decisions will be based on it? What feedback will it provide?
Keyword Driven (Architecture): The test case is defined by a set of keywords.
Normally, they are meaningful to domain experts. A separate program reads and
interprets the keywords to generate, execute and/or verify the test case(s).
Maintenance Cost: All expenses associated with the automation following its initial
deployment.
Master Based Verification: Capture the results of the test case, verify their correctness, and save them as the master. Results of subsequent executions of the test case
are compared to the master to determine correctness.

Model Based (Architecture): Rather than scripting each action of the test case, actions
are decided based on context in a behavioral model. Typically, but not always, the
expected results can be predicted and then verified after the action is executed.
Modular Function (Architecture): Encapsulate the processing of a function into a
module so that when the function changes you don’t have to change every test case but
just the module for that function.
Modular Task (Architecture): Organize test cases by the task they perform. Example:
for a calculator you could group all the addition test cases together, the multiplication test
cases together and so on.
Opportunity Cost: The best alternative that is forgone by choosing this automation.
What is the value of what you could have done with the time and resources you invested
into this automation?
Process: This is a catch-all category that includes any purpose that is not covered by the
other purposes. Example: if you created a clock to time exploratory testing sessions, the purpose may be to simplify the “process” of testing.
Regression: Verification that a change made to an application does not cause errors in
code paths that previously worked correctly.
Requirements Review: Examine the requirements for errors.
Re-use: Any use of the automation other than the initial deployment.
Satisfaction Level: How well do the stakeholders like the automation?
Setup: Activities and processes for setting up a test to run. One example might be
configuring a machine with the proper operating system and software.
System Test: Testing for system related problems such as resource management,
deadlocks, hangs, end-to-end testing, etc.
Test Case Library (Architecture): A collection of individual test cases.
Test Execution (Phase): Run the test case(s).
Test Generation (Phase): Create the test case(s).
Test Impact: How much does this influence decisions? How much time or resources
does it save? How much does it help?
Test Management (Phase): Those processes and activities related to managing the
planning, preparation, and execution of tests. This would include tools such as defect
tracking, requirements tracking, report generation, project schedulers and so forth.
Test Setup (Phase): Prepare the environment to be able to run the test case(s).
Test Verification (Phase): Examine the results of the tests for errors.
Unit Test: Test a single unit in isolation from the rest of the application. This is often a
white-box test.
Use Frequency: A measure of how often the tool gets used.
Verification Completeness: Percent of the verification that could be done that is
automated.
