
Exploratory Testing: A Practical Case Study

Sudhir Patnaik
Associate Director, Product Testing & Automation, Tech Documentation
Accelrys Software Solutions Pvt. Ltd.
12th Floor, Discoverer Block, ITPL, Bangalore 560066
e-mail: spatnaik@accelrys.com
(www.accelrys.com)
Agenda
What is Exploratory Testing?
Exploratory Testing at Accelrys
Some statistics
Session-based Test Management (SBTM)
Implementing SBTM
SBTM and Metrics
Lessons Learnt
Q&A
Exploratory Testing
Exploratory testing is simultaneous learning, test design, and
test execution
Or
Unscripted, unrehearsed testing
Exploratory Testing #1
Exploratory testing has been defined by James Bach as test design and test execution at the same time.
It differs from scripted testing, which uses predefined test cases and procedures.
There are different types of exploratory testing, including:
Pure: the product is explored at will.
Chartered: focus areas are identified for exploration.
Improvisational: scripted tests are extended into new areas of the test phase space.
Test case creation: while writing tests, the product is tested.
We currently use all of these methodologies within the Accelrys QC organization, but not consciously, not in a predetermined way, and not consistently across all sites (Cambridge, Bangalore and San Diego R&D).
Why Exploratory Testing
Analysis
It should be noted that defect counts are not the only measure of QC productivity.
In some cases, low defect counts show that the product is indeed of high quality, or that R&D are finding significantly more and better defects than QC.
Metrics
Exploratory Testing in Accelrys
Some general statistics
[Pie chart: bugs by the activity in which they were found. Automated Regression 2%, Code Peer Review 16%, Coding 3%, Customer Feedback 7%, Exploratory Testing 29%, Manual Regression 7%, Other 36%]
29% of all bugs reported since 1st April 2005 were found during Exploratory Testing.
As of 24th October, total bugs reported since 1st April 2005: 11180

Found during:
Automated Regression     140
Code Peer Review        1236
Coding                   250
Customer Feedback        493
Exploratory Testing     2276
Functional Testing      1085
General Review           189
Integration Testing       94
Manual Regression        480
Other                   4342
Support/Operational       70
Unit Testing              52
Validation Testing       102
Exploratory Testing in Accelrys
Some general statistics
                                        FYTD  April    May   June   July    Aug   Sept    Oct
All Bugs reported by QC                 3657    690    338    258    289    475    826    942
Exploratory Bugs reported by QC         1389    320    105    149    158    256    401    460
Percentage Exploratory bugs reported   37.98  46.38  31.07  57.75  54.67  53.89  48.55  48.83
A consistently significant proportion of all
bugs reported by QC are found during
Exploratory Testing.
[Bar chart: All Bugs reported by QC vs. Exploratory Bugs reported by QC, per month, April to October 2005]
Findings
1. A large number of bugs are found during exploratory testing.
2. From the statistical data, exploratory testing has produced over three times more bugs than automated and manual regression testing combined.
3. Regression testing, for example, whether automated or manual, is aimed at ensuring a product isn't broken from build to build, i.e. running the same set of tests to ensure the integrity of functionality.
Problems in Exploratory Testing
Problems with the current use of exploratory testing:
Not planned or structured; it is completely ad hoc in the majority of cases.
Exploratory testing is undertaken at inappropriate stages, with inappropriate skill sets.
Test coverage is not managed, recorded or analysed.
Effectiveness of testing is not assessed in terms of time and bug counts.
No mechanism for reporting exploratory testing progress to stakeholders.
Others?
We therefore need an Exploratory Testing approach that ensures all types of exploratory testing are planned, structured and efficient, have good coverage and are tracked.
We investigated methods of managing exploratory testing that address some or all of these issues. One such methodology is Session-Based Exploratory Testing.
Session-based Test Management
Characteristics
Gives the testing some structure, although that does not mean it is scripted.
Work is divided into chunks, or Sessions, that can be tracked and measured.
Sessions are approximately 1-3 hours.
Each session has a charter that describes its scope.
Each session has a report listing what was done.
Session-based Test Management can be thought of as structured exploratory testing. It means we have a set of expectations for what kind of work will be done and how it will be reported. The work is done in sessions. At the end of a session the tester reports progress.
Session-based Test Management
Test Charter
A charter may suggest what should be tested, how it should be tested, and what problems to look for.
A charter is not meant to be a detailed plan.
General charters may be necessary at first: "Analyze background job functionality."
Specific charters provide better focus, but take more effort to design:
"Test job submission functionality. Focus on stress and flow techniques, and make sure to submit multiple jobs. We're concerned about resource leaks or anything else that might degrade performance over time."
A Session constitutes an uninterrupted block of reviewable, chartered test effort.
By chartered we mean each session is associated with a mission: what we are testing and what we are looking for.
By reviewable we mean a report is produced that describes what happened during the session. (Note: for some projects this may be unnecessary; recording the fact that the session has been completed may be sufficient.)
Over time the reports, or session completion information, provide metrics with
which we can track progress.
Session-based Test Management
Anatomy of a Test Session
Sessions can be divided into three tasks:
Session setup
Test design and execution
Bug investigation and reporting
If the time taken on these tasks is recorded, it can be used to analyse the effectiveness of exploratory testing.
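To make the time breakdown concrete, here is a minimal sketch (Python, not part of the original material) of one session's task timings and the resulting percentage split. The charter text, field names and minute values are illustrative only, not the actual Accelrys tracking format.

```python
# Minimal sketch: one exploratory test session split into the three tasks above.
# Field names, timings and the bug count are illustrative, not the Accelrys format.
session = {
    "charter": "Test job submission functionality; look for resource leaks",
    "setup_min": 15,   # session setup
    "test_min": 55,    # test design and execution
    "bug_min": 20,     # bug investigation and reporting
    "bugs_found": 3,
}

total = session["setup_min"] + session["test_min"] + session["bug_min"]
for key, label in [("setup_min", "setup"), ("test_min", "testing"), ("bug_min", "bug investigation")]:
    share = 100 * session[key] / total
    print(f"{label:<18} {session[key]:>3} min ({share:.0f}% of session)")
```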
Session-based Test Management
Possible Session Metrics
Number of sessions completed
Number of problems found
Function areas covered
Percentage of areas / Functions still to be completed
Percentage of session time spent setting up for testing
Percentage of session time spent investigating problems
Percentage of session time spent testing
The metrics produced depend very much on the administrative overhead applied to exploratory testing. The list above starts with straightforward progress metrics and moves down to relatively complex metrics used for assessing the cost effectiveness of exploratory testing.
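As a rough illustration of how these metrics could be derived from session records, the sketch below rolls a handful of sessions up into the progress and time-split figures listed above. It is a Python sketch only; the records, area names and planned-session count are invented for the example and are not Accelrys data.

```python
# Minimal sketch: aggregate session records into the progress metrics listed above.
# The records, area names and planned-session count are made up for illustration.
sessions = [
    {"area": "Amorphous Cell", "setup_min": 15, "test_min": 55, "bug_min": 20, "bugs": 3},
    {"area": "CASTEP",         "setup_min": 10, "test_min": 70, "bug_min": 10, "bugs": 1},
    {"area": "Visualizer",     "setup_min": 20, "test_min": 50, "bug_min": 30, "bugs": 6},
]
planned_sessions = 20  # total sessions defined in the exploratory testing plan

completed = len(sessions)
problems = sum(s["bugs"] for s in sessions)
areas = sorted({s["area"] for s in sessions})
total_min = sum(s["setup_min"] + s["test_min"] + s["bug_min"] for s in sessions)

print(f"Sessions completed:       {completed}")
print(f"Problems found:           {problems}")
print(f"Areas covered:            {', '.join(areas)}")
print(f"Sessions still to run:    {100 * (planned_sessions - completed) / planned_sessions:.0f}%")
print(f"Time spent setting up:    {100 * sum(s['setup_min'] for s in sessions) / total_min:.0f}%")
print(f"Time investigating bugs:  {100 * sum(s['bug_min'] for s in sessions) / total_min:.0f}%")
print(f"Time spent testing:       {100 * sum(s['test_min'] for s in sessions) / total_min:.0f}%")
```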
Implementing Session-Based Test Management at Accelrys
A full analysis of the product under test would be done, dividing it into appropriate sessions or tasks. This would be done by the QC lead, who has the requisite level of product knowledge and experience.
Depending on the complexity and size of the project, these sessions could then be distributed to test engineers, or simply worked through by the local test engineer. (Example)
A test session schedule could be created, ensuring progress is on track for a designated completion date (pre-GM candidate).
Implementing Session-Based Test Management at Accelrys
Sessions could be graded on a scale of complexity or domain knowledge required, thus allowing test engineers with differing skill sets to contribute to the exploratory testing effort.
Sessions could be scheduled depending on the degree of change in a functional area. For example, QC could double up on certain sessions that relate to new modules in MS Modeling.
Again, guidelines could be used to help test engineers identify the types of bugs to look out for. (See example)
Implementation planning phase
[Embedded documents: Product-A Exploratory Testing Plan (Microsoft Word) and Product-A Test Session Tracking (Microsoft Excel)]
The sessions for MS Modeling 4.0 are based primarily around Modules and Tasks. Sessions in bold signify parallel runs.

Session # | Session Title | Domain Expertise required? | Duration (min) | Notes
Ex_01 | Amorphous Cell Construction | | 90 |
Ex_02 | Amorphous Cell Minimizer | | 90 | Requires an amorphous cell as input
Ex_03 | Amorphous Cell Dynamics | | 90 | Requires an amorphous cell as input
Ex_04 | Amorphous Cell Protocols | Y | 90 | Requires an amorphous cell as input; Confined shear protocol requires preparation of layers
Ex_05 | Amorphous Cell Analysis | | 90 | Requires amorphous cell results as input; see Help text for additional input requirements
Ex_06 | Blends Mixing Calculation & Analysis | | |
Ex_07 | Blends Binding Energies Calculation & Analysis | | |
Ex_08 | Blends Co-ordination Numbers Calculation & Analysis | | |
Ex_09 | CASTEP Energy Calculation & Analysis | | |
Ex_10 | CASTEP Geometry Optimization Calculation & Analysis | | |
Ex_11 | CASTEP Dynamics Calculation & Analysis | | |
Ex_12 | CASTEP Elastic Constants Calculation & Analysis | | |
Ex_13 | CASTEP TS Search Calculation & Analysis | Y | | Requires preparation of input: Tools > Reaction preview
Ex_14 | CASTEP Properties & Analysis | Y | | Properties can be run as part of a Task = Properties calculation, or they may be added to other calculation Tasks
Ex_15 | DMol3 Energy Calculation & Analysis | | |
Ex_16 | DMol3 Geometry Optimization Calculation & Analysis | | |
Ex_17 | DMol3 Dynamics Calculation & Analysis | | |
Ex_18 | DMol3 TS Search Calculation & Analysis | Y | | Requires preparation of input: Tools > Reaction preview
Ex_19 | DMol3 TS Confirmation Calculation & Analysis | Y | | Requires preparation of input: Tools > Reaction preview
Ex_20 | DMol3 Analysis | Y | | Requires results of DMol3 properties calculations
Session-Based Test Session Tracking sheet
Date Executed | Test Plan ID | Test Session Name | Build ID | Client Platform | Server Platform | Comments | Bugs Entered | Executed by
Fri 21-Oct-05 | Ex_37 | Forcite Analysis | OKQA-CAM-051021-0 | WXPP1 (Windows XP Pro SP1) | | Analysis of trajectories from different modules: 2 hours | 3 | Farah Huque
Fri 21-Oct-05 | Ex_56 | VAMP Analysis | OKQA-CAM-051021-0 | WXPP1 (Windows XP Pro SP1) | WXPP1 (Windows XP Pro SP1) | UV-Vis: 90 min | 2 | Farah Huque
Fri 21-Oct-05 | Ex_17 | DMol3 Dynamics Calculation & Analysis | OKQA-CAM-051021-0 | WXPP1 (Windows XP Pro SP1) | WXPP1 (Windows XP Pro SP1) | Runtime error 05269mmeq01 still occurs; 05095f2hq01 observed | | Farah Huque
Mon 10-Oct-05 | Ex_67 | Analog Builder | OKQA-CAM-051005-0 | WXPP2 (Windows XP Pro SP2) | WXPP2 (Windows XP Pro SP2) | 90 minutes of testing | 3 | RaghavanR
Fri 21-Oct-05 | Ex_67 | Analog Builder | OKQA-CAM-051020-0 | WXPP2 (Windows XP Pro SP2) | WXPP2 (Windows XP Pro SP2) | 2 hrs of testing + time taken to track down bugs | 2 | Pandu Vikram
Fri 21-Oct-05 | Ex_44 | QSAR Models | OKQA-CAM-051020-0 | WXPP2 (Windows XP Pro SP2) | WXPP2 (Windows XP Pro SP2) | 2 hrs of testing + time taken to track down bugs | 1 | Pandu Vikram
Fri 21-Oct-05 | Ex_73 | Visualizer Viewer Toolbars | OKQA-CAM-051020-0 | WXPP2 (Windows XP Pro SP2) | WXPP2 (Windows XP Pro SP2) | 2 hrs of testing + time taken to track down bugs | 1 | Pandu Vikram
Sat 22-Oct-05 | Ex_72 | Visualizer Explorers | OKQA-CAM-051020-0 | WXPP2 (Windows XP Pro SP2) | NA | 2 hrs of testing + time taken to track down bugs | 0 | Pandu Vikram
Fri 21-Oct-05 | Ex_82 | Visualizer File, Edit and Window menus | OKQA-CAM-051020-0 | WXPP2 (Windows XP Pro SP2) | NA | 2 hours of testing + time taken to track down the bugs | 6 | RaghavanR
Fri 21-Oct-05 | Ex_50 | Reflex Powder QPA | OKQA-CAM-051020-0 | WXPP2 (Windows XP Pro SP2) | WXPP2 (Windows XP Pro SP2) | 2 hours of testing + time taken to track down the bugs | 3 | RaghavanR
Fri 21-Oct-05 | Ex_73 | Visualizer Viewer Toolbars | OKQA-CAM-051020-0 | WXPP2 (Windows XP Pro SP2) | NA | 2 hours of testing + time taken to track down the bugs | 8 | RaghavanR
Mon 24-Oct-05 | Ex_50 | Reflex Powder QPA | OKQA-CAM-051020-0 | WXPP2 (Windows XP Pro SP2) | WXPP2 (Windows XP Pro SP2) | 2 hours of testing + time taken to track down the bugs | 3 | RaghavanR
Fri 21-Oct-05 | EX_54 | VAMP Geometry Optimization & Analysis | OKQA-CAM-051020-0 | WXPP2 (Windows XP Pro SP2) | NA | 2 hours of testing + time taken to track down the bugs | 1 | Harish Limba
Fri 21-Oct-05 | EX_79 | Visualizer Tools: File transfer, Server console, Options | OKQA-CAM-051020-0 | WXPP2 (Windows XP Pro SP2) | NA | 2 hours of testing + time taken to track down the bugs | 0 | Harish Limba
Fri 21-Oct-05 | EX_83 | Licensing | OKQA-CAM-051020-0 | WXPP2 (Windows XP Pro SP2) | NA | 2 hours of testing + time taken to track down the bugs | 2 | Harish Limba
Fri 21-Oct-05 | EX_84 | Installation | OKQA-CAM-051020-0 | WXPP2 (Windows XP Pro SP2) | NA | 2 hours of testing + time taken to track down the bugs | 1 | Harish Limba
Metrics
                                        FYTD  April    May   June   July    Aug   Sept      Oct
Hrs Exploratory Testing                  200      9      0     16     18     81     76  no data
All Bugs reported by QC                  186      9      5     21     40     43     68      101
Expl. bugs by QC                         126      7      1      6     32     33     47       60
Non-Expl. bugs by QC                      60      2      4     15      8     10     21       41
Percentage Exploratory bugs reported   67.74  77.78  20.00  28.57  80.00  76.74  69.12    59.41
Lessons Learnt
1. It is critical to the success of session-based testing that the sessions are created appropriately.
2. The session-based testing phase must be scheduled at the right time in the project lifecycle. Too early, and test engineers are still writing and executing NFE test cases; too late, and it risks interfering with the release testing phase.
3. Sessions must be limited to 1-2 hours, otherwise engineers are subject to distractions.
4. Sessions could be graded on a scale of complexity or domain knowledge required, thus allowing test engineers with differing skill sets to contribute to the exploratory testing effort.
5. For test sessions covering complex functionality, it is important to create test guidelines for test engineers less familiar with the product.
6. A review is crucial in assessing the effectiveness of the testing, in terms of bugs produced, coverage achieved, timeliness of sessions, etc.
References
1. Satisfice, Inc., http://www.satisfice.com/sbtm
2. "Session-Based Test Management", Jonathan Bach, Software Testing and Quality Engineering magazine, November 2000.
3. "The Malpractice of Exploratory Testing", Johannes Jahnke, JAHJY001@STUDENTS.UNISA.EDU.AU
4. "How to Measure Ad Hoc Testing", Jonathan Bach, Satisfice, Inc., http://www.satisfice.com
5. "Getting the Most Out of Exploratory Testing", James Bach, Satisfice, Inc.
6. "Exploratory Testing Explained", v1.3, 16 April 2003, James Bach, james@satisfice.com, copyright 2002-2003 James Bach.
7. "Adventures in Session-Based Testing", James Lyndsay, http://www.workroom-productions.com
Q&A
Thank You!
spatnaik@accelrys.com
Comparison of Scripted and Exploratory Testing
Timeframes & Deadlines
Scripted Testing: Significant lead-in time required.
Exploratory Testing: Little or no lead-in time required.

Domain Knowledge
Scripted Testing: Lack of domain knowledge can be overcome during test design by analysis of documentation and interviewing subject matter experts for clarification.
Exploratory Testing: Cannot proceed where domain knowledge is insufficient. Training can be used to mitigate this, but it introduces overhead.

System Complexity
Scripted Testing: Allows for careful design of test scripts to account for test dependencies for complex end-to-end tests of the system.
Exploratory Testing: No facility for managing test dependencies for end-to-end testing. Relies on the skill set of testers to manage these dependencies on the fly.

Documentation Level
Scripted Testing: Requires good documentation to be supplied.
Exploratory Testing: No documentation is required (assumes good domain knowledge).

Skills Required
Scripted Testing: Requires good test analysis skills during the test design phase; however, less skilled resources can be used for test execution.
Exploratory Testing: Requires testers with the necessary attributes to be good exploratory testers. Good test analysts, or those who can execute formal tests, do not necessarily make good exploratory testers.

Efficiency
Scripted Testing: Requires significant investment in preparing test scripts and other documentation required for execution.
Exploratory Testing: No investment in preparation. Testers can also interact with the application more during a given time period, without the overhead of having to read and complete test scripts.

Coverage
Scripted Testing: All tests are formally recorded, with test scripts completed and signed off during execution. Test scripts can be traced back to the original requirements and SRS to demonstrate coverage.
Exploratory Testing: No clear and measurable test coverage.

Verification
Scripted Testing: Formally verifies the system against specifications.
Exploratory Testing: The system is compared only to the tester's expectations and understanding of how the application should work.

Acceptable Risk Levels
Scripted Testing: High-risk areas can be covered explicitly and in greater depth.
Exploratory Testing: Risk of a particular function, business function or scientific algorithm not being exercised during testing. No guarantee of coverage.

Reproducibility
Scripted Testing: Tests can easily be reproduced.
Exploratory Testing: Only defects can be reproduced.
