
Testing Circus

Volume 7 - Edition 4 - April 2016

Magazine for Software Testers

Interview with Dan Ashby

www.TestingCircus.com

Coming soon: www.testing.news - serious about software quality


At Doran Jones, our mission is to help technology organizations improve their ability to deliver software and add business value. We believe the best way to do this is through hands-on delivery, working alongside our clients. Let us show you life at the intersection of talent and opportunity.

Software Development
Software Testing
Training and Coaching
Recruitment
Urban Onshore Outsourcing

www.doranjones.com


Table of Contents

Join the Anti-Test Automation Brigade - Alan Richardson
Quadrants of Context - Jyothi Rangaiah
To Document, or not to Document - Leanne Howard
Communicating Context - Erik Davis
Interview with Dan Ashby - Srinivas Kadiyala
Preparation over Planning: An Agile Test Strategy - Adam Knight
#TestRelatedAccounts to follow @Twitter - Testing Circus Team
Automating Radio Buttons using Selenium Webdriver - Mohit Verma
Is Performance Testing just a Testing? - Alexander Podelko

Testing Circus Team

Founder & Editor: Ajoy Kumar Singha
Team: Srinivas Kadiyala, Pankaj Sharma, Jaijeet Pandey, Vikas Kumar, Pawan Kumar

Editorial Enquiries: team@testingcircus.com
Article Submission: article@testingcircus.com


Chaturbhuj Niwas, 1st Floor,
Sector 17C, Shukrali,
Gurgaon - 122001,
India.
Copyright 2010-2016. ALL RIGHTS RESERVED. Any
unauthorized reprint or use of articles from this magazine
is prohibited. No part of this magazine may be reproduced
or transmitted in any form or by any means, electronic or
mechanical, including photocopying, recording, or by any
information storage and retrieval system without express
written permission from the author / publisher.
Edition Number: 67 (since September 2010)

Testing Circus India


*On the Cover Page - Dan Ashby


From the Keyboard of Editor


I recently came across an organization where staying late at the office every evening is apparently part of the culture. The employees there work more than 10 hours a day, sometimes 11. That is the norm.

Many scientific studies have revealed that working more than 40 hours a week is less productive, harmful to your health, and can reduce your life span as well. Organizations that force employees to remain in the office for more than 8 hours are actually losing productive performance. Each additional hour beyond 8 adds more stress to the employees. Information technology organizations especially, where programmers and testers need to use their brains, are losing the quality work they need from their employees. A stressed brain introduces more broken code and more bugs, leading to more rework.

Today, the norms of many organizations are such that it is news that Sheryl Sandberg, the chief operating officer of Facebook, leaves the office every day at 5:30 p.m. I am not saying we can always leave the office at 5 o'clock. There are days when our work requires more than 8 hours of our attention. But those should be exceptional days, not routine. The technology invasion into our lives is already making our work life longer with text messages, mobile email access, and WhatsApp groups. A study conducted by the American Psychological Association found that more than 50% of us check work email before and after work hours, throughout the weekend, and even when we're sick. Even worse, 44% of us check work email while on vacation.

So leave the office on time. Spend time on your hobby. Go on vacations. Don't pick up the phone while on vacation. If you have to work more than 40-45 hours a week, then your project or your organization is in a mess. Talk to your boss, or leave that job. When you are on your deathbed, you won't remember how many lines of code you wrote, how many bugs you found, or how many client appreciations you got.

In this edition we have an interview with Dan Ashby, along with valuable articles for testers from Erik Davis, Alan Richardson, Jyothi Rangaiah, Adam Knight, Leanne Howard, Mohit Verma, and Alexander Podelko. More news next edition.
Happy testing!

- Ajoy Kumar Singha


@TestingCircus // @AjoySingha

Feedback please!

team@testingcircus.com

Join the Anti-Test Automation Brigade
- Alan Richardson
In this article I'm going to show you a very quick way to fix all your "Test Automation" problems. This will be especially useful to you if you have ever asked either of the following questions: "What test automation should we do?" and "How much test automation should we have?"

Both of these are questions that I am asked at conferences, when I'm onsite consulting with clients, and via email. They seem to be common questions in the testing industry. My answer to both questions has changed over the years.

I used to find these questions hard to answer because I wanted my answers to fit the process and approach of the person asking the question. Inevitably I would have to ask them follow-on questions like: "Are you using an Agile process?", "What technology are you working with?", "Do you have programming skills?"

But I don't need to ask those clarifying questions any more. My answers are simple. And I've reached the point now where my answers are the same:

"None. You can't."

Over the past few years my attitude to "Test Automation" has changed. It has now changed such that I am vehemently anti-"Test Automation".

Immediately, some people reading this will be nodding their heads in agreement. At this point it would be presumptuous for you to do so, because I have not explained why I now take this stance, or what I meant. Those of you nodding may have interpreted my statement as:
- we can not automate testing
- testers should not program

Or possibly some other interpretation. In fact I deliberately led you to interpret it in some other way, because I want to make the point that one reason I am vehemently anti-"Test Automation" arises from the ambiguity present in the phrase.

"Test Automation" doesn't mean anything. Or rather, it means too much. It could mean many things. And as a result, it doesn't really mean anything.

"But Alan, you're so involved in Test Automation!", some of you might be saying if you are aware of my work, because I have books and online training courses relating to Java, Selenium WebDriver, and tools. I also have blogs and web sites dedicated to those topics. I will hastily explain. I am not vehemently anti:
- using tools as you test
- automating the execution of applications and asserting on state conditions
- programming as part of your test approach
- specialising in more technical aspects of the testing process
- and other technical activities

I don't like the words. I dislike the words so much that for the last year, and possibly longer, I've been training myself not to say them, not to write them, and not to use them.

I think they encourage lazy thinking, lead to poor communication, and create unnecessary arguments between people.
What led to this point of view?
Thanks for asking. I shall tell you what has led to my
changing my views.

www.TestingCircus.com

April 2016

- 06 -

I used to use the phrase "Test Automation". I happily described much of my work as "Test Automation". It can probably still be found in old blog posts and articles because I haven't edited my history to remove it. I am self-editing going forwards.

My problem with "automation" started when I realised that no-one knew what I meant when I said the word. And my problem was resolved when I realised that "Automation" was a very new word in the English language.

Start by examining the words themselves. "Test" and "Automation". Two words that are endemic in the Software Testing world. And what do they mean?

"Test" is a verb. "I will Test", "I Test". A verb that has multiple meanings, and we rightly argue about its meaning in the test community, since it seems that it could describe the essence of what we are all about.

"Test" is also a noun. "I will review this Test", "I have written a Test", "I will execute this Test". When people say these statements, I don't know what they mean, because "Test" is ambiguous. I'm sure there are standards with definitions of the term. I'm also sure that when people use the word "Test" they don't always mean those definitions, and I know that I didn't when I used the word.

"Test" is too ambiguous. I try to limit my usage of the word.

Is "Automation" a verb or a noun? Is it a verb? "I will automation", "I automation". Not really, but we can make it part of a verb phrase - we tend to say "I will do automation", "I am doing automation". We do use it to imply action. We do use it as a verb.

Dictionaries typically define "Automation" as a noun. "Run the automation", "Fix the automation", "How much automation has passed", etc. "Automation" as a noun is problematic because it leads to counting ("we need more automation") rather than understanding. Hopefully our aim when we automate is not to increase the amount of things we have automated in order to count them.

Hopefully, when we automate, our aim is to make our test approach better, possibly by:
- removing manual effort that is better delegated to a tool,
- observing at lower levels of the system,
- executing a pre-defined path with more data combinations, and faster, than a human could do,
- etc.

"Automation" was coined in the 1950s; it is jointly credited to John Diebold and Delmar S. Harder.

Delmar S. Harder, a vice president of Ford Motor Company, is reputed to have said "we need more automation" as he toured one of the Ford factories and examined the way cars were produced, because he wanted to increase the speed of production and reduce the variation in the production process in order to compete more effectively in the car manufacturing business.

John Diebold was writing a book on the increasing use of tooling and process automating in the manufacturing industry and its impact on society in general. But he found the word 'automatization' hard to spell, so he used the word 'automation' and it became popular as a result. We had not automated the spell checking process in the 1950s.

"Automation" is a new word. There are other words we could use:
- automatization - the process of making something more automated (a word which no longer appears in some automated spell checking dictionaries)
- automaton - a machine which performs functions according to coded instructions
- automating - to convert to automatic operation
- automated - having converted something to operate automatically
- etc.

These words are less ambiguous. They can become ambiguous if we then couple them with "Test". "Test Automating" suggests a process, but I don't know what part of your test process you are automating, so it remains ambiguous. And it doesn't really help us think about the intent behind our time spent automating.

Some people describe themselves as "Test Automators". I have no idea what they do. I imagine that they automate tests, but since I don't know how they have defined "test", I have no idea what they do. As a role description it is not one that I see value in. I don't know


how they do it. Are they a tool expert? Do they use a specific tool? Can they program? I don't know.
I describe my ability to use code to automate
applications as programming. I can program in a
number of different languages - Java, Ruby, .Net C#,
JavaScript; although I am most proficient in Java. For
example, I use several libraries that can automate web
applications, and HTTP based APIs. I can describe my
skills in words that are easy to understand and ask
simple questions about.
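As a concrete illustration of that more precise vocabulary - "automating the execution of an application and asserting on a state condition" rather than "doing test automation" - here is a minimal Java and Selenium WebDriver sketch. The URL, element names and expected title are hypothetical placeholders, not taken from this article.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class SearchStateAssertion {
    public static void main(String[] args) {
        // Start a browser session (assumes chromedriver is available on the PATH)
        WebDriver driver = new ChromeDriver();
        try {
            // Execute the application: load a page and submit a search
            driver.get("https://www.example.com/search");          // hypothetical URL
            driver.findElement(By.name("q")).sendKeys("testing");  // hypothetical field name
            driver.findElement(By.name("submit")).click();         // hypothetical button name
            // Assert on a state condition: the resulting page title mentions results
            String title = driver.getTitle();
            if (!title.contains("results")) {
                throw new AssertionError("Expected a results page, but the title was: " + title);
            }
        } finally {
            driver.quit(); // always release the browser
        }
    }
}

Described in those terms, anyone can ask simple, specific questions about it: which browser, which page, which state condition.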
We don't need to use the word "Automation" and we
don't need to create "Test Autosomething" phrases.
If I revisit the earlier questions in light of this approach: "What test automation should we do?" and "How much test automation should we have?"

"What should we automate?" I don't know. Where do you perceive risks in your current test approach?

If you perceive risk because you do not have time in the release process to revisit a lot of the application and check that basic functionality still works, then you might want to try to automate some of that work.

If you perceive a risk that you don't have time to explore the system thoroughly because you spend a lot of time maintaining the environment, uploading data, clearing down data, installing software, etc., then perhaps you might want to automate some of that work.
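To make that concrete, here is one possible shape for such a tool: a minimal Java sketch that clears down test data through an HTTP endpoint before an exploratory session. The endpoint URL and its behaviour are hypothetical assumptions for illustration only, not a real service described in this article.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EnvironmentReset {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical admin endpoint that clears down test data
        HttpRequest clearDown = HttpRequest.newBuilder()
                .uri(URI.create("https://test-env.example.com/admin/clear-down"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> response =
                client.send(clearDown, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException("Clear-down failed with HTTP " + response.statusCode());
        }
        System.out.println("Environment reset; time reclaimed for exploring the system.");
    }
}

A tester who can describe this as "a small program that resets our data over HTTP" is communicating far more precisely than one who says "we have automation".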
"What should we automate?" I don't know. What skills
do you have?
If you have no ability to program, and don't have
anyone on you team who can program then you are
going to have to rely on existing tools. Either
commercial or open source. But your 'what should you
automate' will be dictated to you by the functionality of
the tools that you can use. If your team can program,
then your team can write its own tools, and write them
to augment your test approach and help you investigate
risk.
"How much should we automate?" How much 'what'?
Now you have to be specific about what you want to
automate. Do you want to increase the coverage of paths
through the system? Do you want to increase the
amount of randomised data that you use? You have to
decide 'what' before you ask 'how much'.

I appreciate that suggesting that you stop using the word "Automation" is a step too far for some people. That's fine. It's a step I have taken, and it has worked wonders for me. If we have a conversation and you use the word "automation", then remember that I won't know what you mean, and I'll probably have to ask you to explain what you mean, and you'll probably have to use different words in your explanation anyway.

"Automation" is so ingrained in the language and psyche of the testing world that it will be hard to remove. But that doesn't mean we should not remove it. Ultimately we want test processes that are effective, tailored for our environments, and which we have taken responsibility for. We should feel that our testing process can contain and use any combination of: tools, programming languages, technologies, libraries, frameworks, techniques, documentation styles, skills, etc. Any combination that works for us.

Avoid using the phrase "Test Automation" and more specifically the word "Automation". This will solve your "Test Automation" problems because you will never have any "Test Automation" problems again. You will still have problems - very specific problems with tools and technology. You will still have to decide which parts of your process you automate, and where automating will add value.

You will have to think about your approach carefully. But I guarantee that by avoiding the phrase "Test Automation" you will use words that help you think more carefully. People will still understand you. In fact they may understand you better.

Lead by example. Join the anti-"Test Automation" brigade, and automate as much of your working practice as you need to.

Additional Reading
For other opinions on this subject you might want to check out the following resources. In some ways I am expressing similar sentiments to, but approaching the topic slightly differently from, Richard Bradshaw with his desire to promote the phrase "Automation in Testing", and James Bach and Michael Bolton in their paper "A Context Driven Approach to Automation in Testing". I also have presentations on this on my conference talks page.


About the Author

Alan Richardson has more than twenty years of professional IT experience, working as a programmer and at every level of the testing hierarchy, from tester through head of testing. He is the author of the books "Dear Evil Tester", "Selenium Simplified" and "Java For Testers". Alan has also created online training courses to help people learn Technical Web Testing and Selenium WebDriver with Java. He works as an independent consultant, helping companies improve their use of automation, agile, and exploratory technical testing. Alan posts his writing and training videos on SeleniumSimplified.com, EvilTester.com, JavaForTesters.com, and CompendiumDev.co.uk.


Quadrants of Context
- Jyothi Rangaiah

Let me begin by sharing the definition of the word context:

"the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood"

Why is defining context important?
Any problem bound by context demands an understanding of that context as a prerequisite in order to arrive at a probable solution. Not understanding the context can be misleading to anyone involved - right from the client, to the product owner, the developer, the tester and the others too - as we are not all in the same setting or background, or from the same domain, and do not have the same experience and expertise, technically or otherwise.

Why do some of us not state the context?
Shared below is my reasoning, and how to overcome it.

1. We have not yet started asking relevant questions that will allow us to build an understanding of the context. We assume that the listener / reader understands the context, even if we do not state it explicitly. We shy away from asking what the context is and resort to the assumptions we made.
Ask away - it helps to clarify the assumptions made - and document the answers for the benefit of the others involved.

2. Other symptoms of ignorance that I see with the understanding of context are jumping the gun and failing to recognize the important correlations, if any.
Not being in a hurry to state the problem, requirements, user stories and solutions can help. Well-thought-about requirements help - rather than, say, "Design a survey page", ask for whom / where / when this survey should pop up, and whether it is a pop up at all.

3. Rather than restrict the people who we work with from embracing change, let us be aware that requirements change and be accommodative about it.
Being flexible and focusing on being cooperative helps. Being transparent translates to being effective communicators.

4. We have not tried to explain it in simple terms to someone naive in the organization.
As we learn, let us help the team learn with us.

5. Not noticing and including changes in the context as the product matures, and sticking to the context originally defined (read as signed-off requirements / tests), can be harmful.
Embrace change without being adamant.

Understanding the context of a problem is not easy if we have not defined the boundaries within which we are attempting to learn about it. One has to be interested in understanding the boundaries and locales within which we agree, or agree to disagree, to begin defining the context of a problem and come up with feasible solutions.

About deriving solutions based on a context
Not undermining the value of every solution which we encounter along the way, until we find a solution that rightly fits in, is helpful too, as these solutions are the outcomes of taking the time to build and understand the product and the context. Solutions that do not address the problem can be discarded as we proceed.


An urge to define context to myself and share the same with other learners led me to form the below quadrants of context. Shared below are the four quadrants of context, which can help a beginner learn to set and state the context.
1. What is context?
2. Working together
3. As a leader
4. What next?

QUADRANT 1 - What is context?

Consider this example as an exercise: "A dog walker". We pictured something in our mind as we read the title of this exercise.

Iteration 1
Take notes on what/how/who/when/where you pictured. Work individually in this iteration. Fill up a post-it with the top 4 or 5 ideas at the top of your head (with the mention of this exercise). With this, we have started defining context. Read out loud from this list, and the biases / boundaries you had as you read "A dog walker".

Iteration 2: Now pair up
Make another set of notes from a new view (a different set of eyes, mind, perspective). Add on to this list. Limit this list based on the mission statement you have set for this exercise. Remain in scope, diversify, check if it still remains relevant to the mission, decide to include / exclude a scenario, and question the same.

This exercise can help us understand how requirements are written, which are then passed on to us to write high level epics and low level user stories and their respective acceptance criteria. End of exercise.

Invest an hour in this exercise before you pick up a user story and jump into coding and / or testing. It simplifies and saves time at the later stages of design, development, testing, release and post-release, if we spend just enough time defining the context.

Diversifying
If we choose to work solo at a point in time, not mingling so much with the team because we like to work solo, then start by sharing the solution with others in the team when it is ready. Together brainstorm several solutions, then implement one feasible solution that suits the context which we have defined together as a team - and know that this too is mutable. Try to be flexible when you become aware of another element that adds on to the existing context.

If we are working solo, we could still benefit from the don'ts, or what not to implement, by brainstorming. Pick the heads with whom you wish to share your ideas. Don't select all, or anyone at random from the team. Involve a representative from the team who can contribute to the existing ideas and can add on to the solution with their own perspective.

A classic example of being misunderstood by not defining context would be some of our tweets: tweeting in 140 (or fewer) characters about a topic. Microblogging on Twitter is very different from blogging, where we write elaborately, explaining the context in detail - for example, who our audiences are, what the purpose of the blog is, and the title itself being descriptive.

How many of us would state the context of a post being made on social media? Otherwise the post usually tends to lead the readers to confusion, and they arrive at an unintended conclusion when they read it. And what percentage of your audience would get right the first time that which you stated as a requirement / a problem / a perspective / a solution? Now re-post this post / tweet by mentioning the context.

Question Why, What, How, Who, Where - this heuristic helps us begin to understand the context that we wish to define. Plus, ask: Why not, What if, How else, Who would / wouldn't, Where else?

Another anecdote to understand the importance of context is shared below.

Agile testing - At the beginning, and for many years in testing, I did not know how to define agile testing. This learning came to me with the understanding of both the waterfall and agile methods of software development, plus by not limiting learning agile testing to the testing perspective alone, but going beyond it. Pair programming, mob programming, pair testing, mob testing, and listing the challenges that I faced, helped.

Questions such as these contributed to my understanding of Agile testing:
- When should we begin to test?
- What and how should we test?
- Should we test already?
- Who should be testing this?
- Should we stop testing now, based on a context we as a team have defined for this sprint?

Answer these and proceed to define Agile testing for yourself.

Conclude
Learning about context can be made easy, and it is a good experience to share this learning with others to gain knowledge about what we are building, right from the product discovery phase. Try it out for yourself.

QUADRANT 2 - Working together

Work as a team, even with the clients, and not in seclusion (privately). Note that I did not use the word isolation (separation). One can prefer to work in isolation, but not in seclusion. Involve the client in your discussions and decision making; work as friends (YESSS!) and not as an enforcing employer.

Be transparent: share with them the cost of each activity, the target and the timelines, so they are clear from start to end. Surprise them by delivering what they asked for, and by enhancing it. Keep them in the loop at all times, and know that being goal oriented and transparent in transacting helps all involved.

Why is working together important in the understanding of context? To take the team with us. If we don't care or bother about this, then we could just be missing out on the education and training of our workforce.

QUADRANT 3 - As a leader, how can I help my team be context driven?

Be aware that it is not a mindset. In today's trend, not everyone likes to buy the sales pitch that context, hacking, or testing is a mindset. Then what is it? It is a skill, developed by practicing, experimenting, learning, unlearning, relearning and by interacting with the knowledge sources.

Note that being context driven is not so much about the mindset. It is not possible to help everyone in the team develop such a mindset, only for it to be dismantled for a different scenario. Doesn't that sound like what machines are built to do?

It is about developing a sense of understanding the why first, and then following it up with the what, the how, the where and the who.
- Why are we building this software?
- What is in scope, out of scope?
- How do we build it?
- Who are our users?
- What data do users feed into the database?
- Why did we ask for this information to be input / fed and captured?
- What else can we do with this massive data that we have collected?

Understand user behavior, derive trends, make the most of it, and yet be challenged at keeping it all simple. Ask these questions not just for the sake of doing it as an exercise, but with an intent to learn and build.

If we are developing a product for only one user, even then that user is bound by factors such as hardware, software, network, where the system is installed, and the user's mindset. Among other factors, these too play a major role in altering the conditions of how we use a product.

QUADRANT 4 - Where do we go from here?

Let's begin by re-introducing ourselves to context and by defining it to ourselves.

What next? Change the equation / definition of context. Extend it, limit it, challenge, question, record, repeat, share, conclude.

Conclusion
Know this:
- Expand your horizons and know when to stop
- Let us be fair to ourselves when decision making, so we can be fair to others
- Yes, and remember to agree to disagree when we build and demolish the boundaries when defining the context
- Remember, if we do not define the context, the readers construct a context of their own as they read the requirements
- Define context; there are chances of being highly misunderstood if we do not define it


Applicability

By not limiting the learning and the understanding of context to building a software product, but extending it to our everyday conversation, we can be better equipped to state our ideas and requirements with finesse, which is what usually leads to the birth of a product.

Further reading

"Are Your Lights On?" and "An Introduction to General Systems Thinking" by Jerry Weinberg

About the Author

Jyothi Rangaiah has been testing to learn and learning to test software since 2005. She is the editor of Women Testers and a rebel by nature against poor practices and misinformed beliefs in testing. Jyothi has written for testing e-magazines and been featured in Testing Circus and Teatime with Testers. A challenge seeker and a risk assessor, she currently tests mobile applications and web applications. An agile enthusiast experienced in building agile teams, Jyothi encourages the growth of testing skills such as analytical and critical thinking. She tweets at @aarjay.


To Document, or not to Document:
How Much Documentation Should Teams Produce?
- Leanne Howard
Knowing how to create documentation is something I am often asked about, having worked with many teams. It is also one of the most challenging aspects of Agile that teams struggle with, particularly coming from traditional projects. The answer to the question of how much documentation to create is "it depends". This article will not give you the magic formula, but hopefully provides some guidance which can be applied with a bit of good old common sense.

Starting at the high level, here are some dependencies to consider regarding the amount of documentation required:

More documentation
- Newly formed teams - more may be required if levels of trust have not yet been built
- Compliance driven project - need to provide high levels of traceability to individual compliance requirements
- High turnover of staff - conversations cannot be transferred efficiently to new staff without providing it in writing
- Non-collocated - sharing any information face to face can be a challenge

Less documentation
- Well established teams - collaboration patterns established
- No requirement to meet external standards - informal conversations are good enough
- Stable teams - the IP is retained in team members' heads (although this may be a separate issue)
- Collocated - all information is shared across the team

Start with high level stories


There is no point in writing any level of detail if the team
is going to get regular feedback and potentially move in
another direction, as some of the lower Business Value
Stories (or epics at this stage) may never get worked on.
While there is nothing more soul destroying than seeing
all your hard work not being delivered, it is important
to think of these stories/epics as placeholders only.
These may have been requirements at some point in
time; however with the fast feedback loops in Agile,
things can change rapidly. Ideas that seemed good at
the time may now have little or no value.
Understand the required quality attributes

An often overlooked area when capturing user stories is the non-functional or quality attributes. The standard ISO 25010 provides a good description of these attributes, which can almost be used as a checklist. Certain words that people use when describing requirements should trigger further questioning, such as "easily" or "quickly". These then need to be converted into non-functional requirements with testable acceptance criteria.

Let's take these quality attributes and look at them in further detail:

- Easily: alerts you to the fact that there is likely to be some usability criteria, or at least to remove this wording if this is not the case. We could be talking about how the use of the application is first learnt. Is the user able to navigate through in a logical way? Can they understand what to do if warning messages are displayed?

- Quickly: implies a performance attribute requiring further questioning to find the expectation of the user. For example, "search results return within 2 seconds for a single user" is testable, while "the search is quick" is not. There may even need to be some expectation setting and negotiation; it is good to start these conversations early.


Non-functional requirements are worth documenting completely, so they can be discussed at the start of the project and continuously factored into the product throughout the build. Non-functional attributes are often missed until well into the project, by which time they have become more difficult and expensive to retro-fit. These quality attributes are also the ones that may cause dissatisfaction with your customers if they are not considered before the product is released to production.
Manage documentation within teams

An area where documentation is important is where multiple teams are working on an application. Decisions made by one team could impact others that integrate with it or use a shared service. The interconnected teams need to agree an appropriate level of detail that meets both teams' needs, without causing too much of an impact on team velocity by requiring documentation in minute detail.
Challenges with team documentation

When teams initially move to Agile, difficulty is often encountered when judging the amount of documentation needed in order to get started. Using the traditional mindset of wanting to document as much as possible before handing it over to the team, the Product Owner or BAs might feel challenged if the rest of the team start to ask them questions. It might feel like they have not done their job properly, and that the correct level of detail has not been defined, if there are still questions. Education may be required to encourage the questions, and the collaboration to elaborate on lower level detail requirements, that are part of the Agile process. BAs can spend too much time going into detail on all the stories, which sometimes slips into providing solutions, when in fact the stories are placeholders for further conversation when they are about to be consumed by the team.

The above is not only a challenge for the BAs but also for developers and testers new to Agile. Being used to high volumes of detailed requirements which they are no longer going to get can be an uncomfortable experience coming from a traditional background, and a balance needs to be struck between team members: just enough documentation to get started, but enough to provide some security in the level of detail, even if this is only perceived. This need should naturally lessen as the team matures.
Capture information in different ways

I like to see teams capturing information in different formats, rather than pages of text. One technique for recording lots of discussion in a summarised format is the mind map, allowing conversations to flow, providing an easy display of information and outlining important relationships. We all know that a picture can capture a thousand words. Drawing diagrams of process flows, or mock screens and pages, can help consolidate team views and promote questioning. Someone in the team just needs to convert this into a digital form, which could be as simple as taking a photo and placing it in a shared project space, such as a wiki.

The level of documentation is a likely topic to appear in retrospectives. It can easily become something that is set at quite a large volume, and remain like this unless it is challenged. Keep this in mind when thinking about fast feedback loops and striving for continuous improvement. Only detail information when it is about to be consumed, rather than producing a lot of upfront documentation: just in time, at just the right level of detail.
Conclusion
To answer the question about how much
documentation a team should produce, unfortunately
there is no rule which says a certain size fits a particular
situation. Even when you find the right volume, this
needs to be challenged as the team matures, and the
amount of documentation will eventually move to just
enough. There are some ideas contained within this
article that should help you get started and can be used
within your retrospectives to continuously improve in
this area.

About the Author

Leanne Howard has over 25 years' experience as a Test Manager, Principal Test Consultant and, most recently, Agile Practices Consultant. In her consultancy role (and as the author and chief examiner of a number of agile and testing courses) Leanne has gained a unique insight into the issues and frustrations that beset teams and individuals implementing or seeking to uplift agile capabilities. Leanne is a keen member of the larger IT community through meetup groups, the ACS, SIGs, and writing articles to share insights into her craft.

Communicating Context
- Erik Davis
No, this is not another article about the importance of context to software testing. Well, not exactly. There are plenty of those out there already and I don't need to attempt to rehash what others have said already.

Instead, I want to talk about communicating context while writing and speaking (and reading and listening) about testing, though I suppose it would apply to any topic really. This didn't even register as something that needed to be written about until a recent batch of examples made me stop and think.

Frequently, when discussing an article with others, or attending a session at a conference, I see people react to a specific idea or phrase negatively. Sometimes this is because the audience member fully understands, but still disagrees with, the ideas presented. Other times though, the reaction is caused by a communication breakdown between the speaker/writer and their audience. The listener does not understand the speaker's context well enough, is not applying their own context to the ideas presented correctly, or both.

The most recent example I can bring to mind was a talk at the CodeMash conference back in January. The talk discussed a testing team's approach to testing a product that releases many times a day. The speaker was advocating for others to try the approach their team used.

Afterwards, several people huddled near the back, emitting rumbles of "that couldn't possibly work" and "there's no way they actually do that". It was obvious to me that at least some of the attendees didn't pick up on the context clues in the presentation. They simply took the speaker's words as is: "We follow these steps and you should too." Without thinking about it, some in the audience applied those statements to their own situation and things didn't line up. For them, it was like trying to thread a needle with an elephant.

The problem in this case is context. Or more correctly, the lack of expressed/understood context between the presenter and the audience.

For the presenter

Be aware that your experience, regardless of how vast it may be, is yours, and not necessarily that of everyone else in the room. It would be helpful to explicitly convey the context in which you felt the methods you discussed could be beneficial. You talked about your environment and what you did to make things work, but later when you said "... and you should try it too" you could have added something like "... if you are in a similar situation with similar software". Or prefaced the talk with "this is the environment in which I work; if yours is dramatically different, your mileage may vary". Keep this in mind while crafting your message, and after you write/present, in case you end up with people reacting as if you called their baby ugly.

For the audience

You need to think about what you are taking in. You should apply some thought to what is being written or presented. Not everyone is going to remember to add all of the details you need to hear to properly understand if an idea applies to you or not. There won't always be someone sitting a row ahead of you who can turn around and add "The ideas presented might apply well to a web app built and tested by a relatively small team that releases 25+ times a day, but since you work at a company with a much larger product under test, that spans multiple platforms and releases much less frequently, created by a much larger team, it's not likely these ideas will work well for you." Should you end up in a conference talk like this, one that does not fully apply to you, stop and think about your situation, and the context from which the author derived their ideas, before shouting "that'll never work". It may be true that their ideas won't work for you. But some portion may work for you if you actually understood what was being presented.

For everyone

Be aware of your context and how it affects what you do and biases the ideas you develop. When communicating with others, be sure to discuss context, as it is likely that at least some part of your context is different from anyone else you ever talk to. This will help you limit miscommunications and hopefully allow you to more easily spread your ideas to others.

About the Author

Erik has over 16 years of experience in and around software testing. He's been everything from a junior tester to a manager of managers. Currently, Erik is focused on automation. One of his two teams is responsible for the portion of automation that resides in the Test department of Hyland Software; his other team handles all internal education for his company's 200+ testers. Erik has been an active conference speaker since 2013 and is currently a Director and Executive at Large with the Association for Software Testing. He tweets at @erikld.


Interview with Testers

DAN ASHBY
Software Quality and Testing Capability Lead, AstraZeneca, London

Dan is in the Senior Leadership team at AstraZeneca. He's been testing for over a decade, working on a wide variety of products, from printer software/hardware/firmware to web and mobile apps and sites of all different shapes and sizes.

Dan is passionate about context-driven testing and is currently focused on testing web-based software while coaching/training people in software testing and agile. Dan is involved in the testing community and regularly speaks at conferences. He also blogs (danashby.co.uk), is the co-host of the Testing in the Pub podcast series (testinginthepub.co.uk) and runs the Software Testing Clinic workshops in London (softwaretestingclinic.com).

* Interviewed by Srinivas Kadiyala


1. You are an international speaker, a writer and a tester. How did all this begin?

Similarly to most people, I fell into testing. I actually studied Electronic and Electrical Engineering in my home city of Glasgow in Scotland. When I graduated, I applied for lots of jobs with "Engineer" in the title, and the one I managed to land was as an "Evaluation Engineer", which was a really fancy name for a tester! I really enjoyed learning about Software Testing as part of this role. It kick-started my career and I haven't looked back.

I eventually found the testing community when I moved to London, and it opened up a whole world of learning and talking about testing. I owe a lot to the community for getting me to where I am.

Regarding speaking and writing, being involved in the community made me realise that I had a few stories to share, and so I plucked up the courage to blog and get up on stage to share them. Initially that was terrifying... but I really enjoy it now! I am addicted to public speaking now. It's a lot of work, but my Mum and Dad always said that to go far in life you need to work hard!

2. You have been in testing for a decade. What are your recent engagements and responsibilities in testing, and how are they different from previous engagements?

10 years has flown past! I have recently moved to AstraZeneca into a Global Head / Senior Leadership position. AstraZeneca are a top pharmaceutical company. I've worked for a pharma company in the past, so it's not an entirely new domain for me, but I'm sure it will certainly bring new challenges, new responsibilities and new experiences.

I've never taken the leap into the contracting world, but I have experienced being in a consultant role on a permanent basis when I worked at Lab49. We worked with various financial clients building bespoke software. I had the role of Deputy Head of Quality, which also allowed me to continue the coaching side internally, which I enjoyed.

Coaching is something I have been doing for a good few years now. I've worked in many domains too: finance, medical/pharma, ecommerce, government, energy, and on hardware, software and firmware for a big printer manufacturer. I still like getting hands-on though, either by getting involved in projects or through pairing as part of the coaching I'm doing.


Regarding the community, I have my fingers in many pies at the moment. I co-host the Testing In The Pub podcast with Steve Janaway, which is fun. Steve is a great guy and he's taught me a lot through my career too, so I'm really happy to be doing the podcasts with him. I'm also co-organising the Software Testing Clinic with Mark Winteringham. Mark is awesome! He knows a lot of stuff and he's itching to teach people, so the clinic is perfect for him and it's great fun to be running it with him.

Additionally, I'm speaking at lots of conferences, meetups and brown bag learning sessions at various companies. Keeping busy and generally trying to tell my stories wherever I can. I'm enjoying it. I am relocating to Cambridge in the UK soon, so I will hopefully break into a few more community events there too.
3. You are speaking at Agile Testing Days 2016 on "How Ignorant Are We?". Can you share some insights?

I am speaking at ATD Scandinavia. The talk is based predominantly on the five orders of ignorance regarding information. I will relate that to software testing, and I'll also discuss a model that I created (thanks to the help of some feedback from friends in the community), which focuses on information and its relationship with testing and checking. I think the model might help people see the distinction, especially with the focus being specifically on information.

I am really excited about doing this talk. I have gotten into the habit of hand-drawing my slides too, so it should be fun to present, and hopefully the slides will get a few laughs.
4. You have been a coach, mentor, and guide to many people. What is the most common problem that you fix in those people?

Great question, and I'm so glad you asked this! One of the biggest problems I see that people have is how to talk about testing. I have mentored a few people - some on testing, some on public speaking, some on agile process or topics like writing feature files, etc. One thing that continually comes up in all these conversations is the mentees' stories about problems relating to talking to other people about testing. Be it with other team members, management, clients, friends, family, whoever - either caused by being challenged about the value that testers bring, or stemming from curiosity. There seems to be a struggle with being able to talk about what they do and the value it brings.

I don't think it's about fixing people. I think it's about teaching people and sharing stories with them to supply your own knowledge. They can then use this to practice talking about testing with you, to gain some skills and a little experience in dealing with these situations (although mimicked in the mentoring sessions). This hopefully builds their confidence in being able to talk about testing, so they can take this back to the situations that they find themselves in.

I like to talk about communication at conferences and as part of workshops. The words we choose to use have an impact on how someone understands what we're trying to say. Sometimes good words are hard to find. Sometimes we like to use words to sound intelligent, but sometimes that can confuse things and make matters worse. And most people have a fear of asking what something means, as no one wants to look stupid.

Take the word "performance" for example. I was once in a situation where a team that I worked in was talking about a user story which was focussed around performance. The team had been talking about this story for a week. I joined them and immediately asked what we meant by the word "performance". Some people gave me a look as if I should automatically know what was meant by the word - isn't it obvious? When I pressed them for an answer, three people spoke at the same time and they all gave completely different answers. One said concurrent user load, one said data load, and the product owner said page response time for a single user.

They all looked at each other in disbelief. I fought a smile from showing on my face, and I wanted to shout out "this is why you should have testers involved - to ask the questions that we're not afraid of asking!", but I could sense that they understood the value at that moment already.

Anyway, I think testers need to have good communication skills and a good level of knowledge to be able to fluently talk about testing as part of their role - which words to pick and use, and how to describe things effectively and efficiently.

The community helps with this! It is a pretty safe environment for being able to practise, and also to listen to other testers to see how they talk about testing.
5. You have done a masterclass session on "How Dan Interviews Testers". What good practices do you use in the hiring process?

The masterclass session was brilliant! I really like what Rosie Sherry and Richard Bradshaw have got going on with the Ministry of Testing. I am sure they've got some great sessions coming up. If you have a pro membership, there is a host of info that you can get, from past masterclass sessions to whole courses.

My masterclass session contained some good stories about how I interview people. I spoke about a few models that I utilise when thinking about professional testers and I presented the mind map that I blogged about back in December. I think the best advice I can give is basically to have a conversation. Build a rapport, and although you might have a list of questions, don't treat them like a checklist. Let the conversations flow. The idea is that you are trying to investigate, to uncover information about the candidate's skills, knowledge, and the experiences that back up their skills and knowledge. You also want to discover information about their attitude and personality.

You should also think about the perspectives of what's involved in the role. The candidate isn't just a tester. They work in a team, interacting with other teams within an organisation, possibly working with clients, and they are hopefully part of a community or two.

I can't advise on specific questions to ask. That is completely dependent on your context. Keep the flowing conversation in mind and test the candidate to uncover that information. And if you're a candidate in an interview, test the company back!
6. How do you keep up to date with the current technology changes?

I like reading blogs and I am also active on Twitter. These are huge resources for learning about changes in technology and they are my main sources for keeping up to date.

People who know me personally know that I'm a big fan of the latest Microsoft products, and I'm really excited about the Microsoft HoloLens that's coming soon. It is essentially a headset that displays holograms in your field of vision - different from completely immersive VR headsets, as the holograms are projected or pinned onto your real environment. I think it's incredible where Microsoft are going with this, as they are unifying their Windows 10 app store so that you create one app for all Windows 10 devices. HoloLens is a Windows 10 device, so it will run any Windows 10 universal app.

I would be really interested in testing some apps on the HoloLens. I think testers will be starting to test apps on HoloLens devices in the near future. It will bring more stories to share within the community too.

The community is where I learn the most about new things in testing. Whether it's new processes, testing models, new heuristics, new tools, test ideas or a new game that might help for coaching, the community is a fantastic source of information and is very valuable.

7. How did the Testing in the Pub podcast start? How is it unique from others currently available?

Steve has been a friend for years now. We met on the Rapid Software Testing course back in 2011. We both kept bumping into each other in the community too. One day, Steve asked me to come and work at Net-A-Porter with him and I made the move. It was fun! I met so many good testers there and some great friends.

Steve and I sat next to each other and would have these epic conversations about testing. We had the idea of starting to record them on a phone and put them out as podcasts, and I'm glad we did. It is great fun and people seem to really like our conversations.

We expanded into interviewing testers and I think we are up to almost 30 podcast sessions now.

I think Testing In The Pub is different because it's so casual and relaxed. We usually focus on a general theme for each session, but nothing is scripted and the off-the-cuff conversations flow really naturally. I think that is why it's easy listening and people like it.

8. Which five books should every software tester read, and why?

I enjoy reading. I would say my top 5 books that anyone involved in software should read are:

- Perfect Software and Other Illusions About Software Testing by Gerald Weinberg: This is an essential read for anyone involved in working in the software industry.

- Explore It! by Elisabeth Hendrickson: Every software tester needs to read this book. It is a very easy book to read too. There are no excuses not to read this book.

- Lateral Thinking by Ed de Bono: This book is a little harder to read, but well worth it. The book goes into detail about different thinking skills. It's a psychology book and is very important for software testers, as the thinking skills described in this book are a key trait for any software tester. This book isn't just for testers though; it is for everyone.

- Tacit and Explicit Knowledge by Harry Collins: I mentioned previously my model regarding the relationship between information, testing and checking, and this book is all about information and knowledge. There are different types of knowledge. It's so important to understand these types of knowledge, as the information in this book really relates to software projects. We develop software based on the information that we have through various artefacts, which guides our development, testing and checking activities.

- Influence: The Psychology of Persuasion by Robert Cialdini: Another psychology book. This book is one of my favourites. As testers, a huge part of our role is to talk about testing (I mentioned this previously). There are tons of misconceptions that people have about our craft regarding what we do and the value that we bring. We need to be able to influence those people. That's part of our role now as modern testers. This book will definitely help you with this.

I realise I've supplied 3 psychology books for testers to read... A couple of testing books that I'd like to squeeze in if I can:

- Agile Testing by Janet Gregory and Lisa Crispin: Janet and Lisa are amazing people. They are really


inspiring and their book is the same. It's a big thick book, but don't be put off! It is filled with valuable real case studies and information, along with some great models like the testing quadrants, etc. It's well worth opening, and once you get started you won't regret it.

- Lessons Learned in Software Testing by James Bach, Cem Kaner and Bret Pettichord: This is another fantastic book. It is very easy to read and you can jump in and out and hop around within it. It consists of 293 extremely valuable lessons. They are all bite-sized and very easily digestible. It is a book that every tester should have to hand to reference when needed.

So that's the 7 books in my top 5. (They regularly rotate, so it's allowed!)
9. How did the Software Testing Clinic start? Is it available online?

The Software Testing Clinic is essentially a safe environment for people to learn about testing and for mentors to gain mentoring experience. It is run fortnightly and is completely free.

It was formed as a result of Mark Winteringham and I having a conversation in which we said that we wanted to put together a curriculum to teach testing to anyone who is interested, in a safe environment. We also had the idea of putting something together for people who are experienced, so they could come and learn about mentoring and gain some mentoring experience in a safe environment, for free too. So naturally the lightbulbs flashed in our brains and we set the wheels in motion to form the clinic.

We quickly created the SoftwareTestingClinic.com website, added a mission statement on the site and formed our Twitter account (@TesterClinic). We then set about finding some sponsors that would be able to provide us with a free space, food and drinks too. Next, we planned our first handful of sessions, set up a registration page on eventbrite.com and advertised the sessions on meetup.com and via the Twitter account too.

It goes to show how much people have been crying out for this kind of freely available workshop, as it has been very popular. We have run 3 sessions so far: the first on "What is software testing?", the second on "Testing Requirements" and the third on "Oracles". There are many more sessions in the pipeline, which you can read about on the website. We plan to continue running the clinic indefinitely, as long as students and mentors keep showing up, keen to learn and getting value from the sessions.
10. How do you plan for career progression in testing? What advice would you give to testers in the community?
There is a lot of talk about testers needing to learn how to code and skillsets moving to automation. I think that's pushed by people who don't fully understand testing. I think the most important skills needed to become a professional tester are: lateral and critical thinking skills, great communication skills, influencing skills, people skills, knowledge about the craft of testing, knowledge about the value that testing brings, and knowledge about how our contexts affect our testing. All these things form the basis of good testing and you will go far with these building blocks.
Coding helps! Richard Bradshaw has spoken a lot about how coding and automation can assist your testing tremendously, and I completely agree with that. I wouldn't say it is specifically needed to progress in your career, but it is valuable.
I don't code much myself. I can write some Ruby scripts, but I wouldn't have confidence in building frameworks, etc. I am part of a team and the team has the goal of building a quality product. My team realize the value that I bring in helping us to achieve that goal. I am sure of that because I am able to talk to them fluently about the value and necessity of investigating the software, its designs and its ideas, to teach them about the value that testers bring.
I think one thing that will definitely allow you to progress in your career is the community: building your profile and sharing your stories, as well as plucking up the courage to get up on stage to speak at conferences, and blogging too. All of that helps to build your profile within the industry. It all gets noticed and it puts you in a position where you are able to show your leadership skills and start teaching and mentoring other people.
I am positive that my involvement in the community and its activities has definitely helped me to climb to where I am - moving into a global leadership role with a top pharma company. That's why I feel thankful towards the community and definitely want to give back so that I can help others too.
11. Name a few people in testing who influenced you directly or indirectly in your career as a software testing professional.
Obviously I'm a fan of Michael Bolton and James Bach, Janet Gregory and Lisa Crispin, along with many other big-name worldwide testers.
A few people that have really inspired me in my
direct vicinity within the testing community are:
Tony Bruce, Steve Janaway, Richard Bradshaw,
Mark Winteringham, Toby Sinclair, Christina
Ohanian, Duncan Nisbit, Danny Dainton, Rosie
Sherry, Lauren Braid, Paul Coletti, James Lyndsay,
Amy Phillips, Dan Billing, Ale Moreira, Simon
Franklin, John Stephenson, Chris Chant, Emma
Armstrong, Alan Parkinson, Bill Matthews, Gus
Evangelisti, Jean Ann Harrison, Santhosh Tuppad,
Patrick Prill, Alan Richardson, Steve Green

There are so many more people that have inspired me throughout my career - too many to mention! The people I have mentioned above are all great people to connect with in the community and follow on twitter, and their blogs are well worth reading.
12. Five things nobody knows about you.
I used to be in a band (I played bass). We made it quite far! We played on TV, recorded a few albums and toured, the highlight being playing at the London Astoria.
I lived in Shanghai for a year. It was amazing to experience a completely different culture and I even managed to pick up a few Mandarin phrases, although at times the language barrier did have its difficulties. I couldn't get a job in IT, so I taught English in schools. I like to think that there are a few Shanghainese children out there speaking English with a Scottish accent!
I am moving to Cambridge very soon. I'm really looking forward to the move and to meeting some new communities. I'll still remain in the London communities too, as London is only 45 minutes away on the train.
I have been with my wife for 14 years and we got married last September. Married life is great! My wife is very supportive of my career and actually joined one of the recent Software Testing Clinic sessions to get an insight into the testing world.
I am a huge whisky fan. Similarly to the books I mentioned, I have about 9 or 10 whiskies in my top 5. I'm sure this is something that people know about me already though.
13. Usually our last question. Do you read Testing Circus Magazine? If yes, what is your feedback to improve this magazine?
I really enjoy reading the Testing Circus magazine. It contains many inspiring stories in each issue.

I regularly share the magazine around to colleagues too.
One possible idea to add to the magazine could be to introduce a section where you ask a series of questions about an important topic in the testing world and get people to write in their thoughts on the topic, or their answers to the questions. You could either tie it in with twitter, or create an online forum for the answers. The magazine could then publish a few of the top answers in the next issue, along with a new question/topic. I think this would be quite cool, and might offer another type of interface for people to learn.
Blog/Website: DanAshby.co.uk,
TestingInThePub.co.uk, SoftwareTestingClinic.com
Twitter : https://twitter.com/DanAshby04



Preparation over
Planning
An Agile Test Strategy
- Adam Knight
When I started out in software testing, test plans and
strategy documents played a central role in the testing
process. On larger projects the test plan and test strategy
were separate pieces targeted at different levels, but on
many of the smaller projects and teams that I worked on
these terms were ubiquitous and related to a single
document. Despite the bad press that such documents
may now receive, at the time the creation of a test strategy document wasn't seen as a chore, or a necessary evil. The creation of a test strategy was in fact a highly prized responsibility for those working in software testing. Creating the test strategy was the preserve of the test team lead or test manager, a mark of status on a testing project. Due to the long duration of many development projects, the opportunity to create a test strategy did not often arise, yet it was the very length of these projects that really inspired the desire to create them. In reality it was often the case that the strategy was not referred to once the development had commenced. The strategy often contained a lot of standard "boilerplate" content and it was common for the actual testing that was done to deviate significantly from what was defined in it.
Inevitably any detailed plan was rarely maintained with the decisions that were made as the project progressed and the need for such decisions emerged. All of these eventualities undermined the validity of creating an up-front strategy; however, the fact remained that a significant commitment of testing time and resources was about to be made to a long-term project, and the creation of a strategy provided the people making that commitment with a sense of confidence in understanding how those resources were to be used.
If we skip forward to today, much of the software development landscape looks very different. Many organisations have, in some form or another, adopted iterative Agile development methodologies, such that in the 2016 State of Testing survey over 80% of respondents reported that they were working in an Agile or Agile-like process [1]. Inherent in such approaches is an acceptance of change throughout the software development lifecycle. Change in priorities, change in implementations, and in the domain of lean start-ups even changes in the target market or purpose of the software being created. In the face of such volatility, and a manifesto that advocates working software over comprehensive documentation, it was somewhat inevitable that test strategies and plans should fall out of favour.
The decline of the test strategy and planning documents has left something of a vacuum in our ability to define our approach to testing. Is it the case that we don't need a test strategy in Agile? Not at all; I've spoken to Agile teams who have struggled specifically through the lack of a coherent testing strategy. Our ability to understand our testing strategy is still as important, if not more important, than it ever was. What is needed in Agile organisations is an approach to establishing a testing strategy that is appropriate to a culture of low documentation, high levels of team autonomy and constant change.
Missions At Many Levels
Testing in an agile context comes with an expectation of
ongoing change in the development activities that
testers are involved in. Whether working in sprints or
value streams, the constant shifting to focus on
emergent customer value provides an ever moving
target to challenge any attempts at coordinating testing.
Now more than ever I believe that we can consider not
just the low level interactive testing, but the entire
testing effort for a product or organisation to be an exercise in exploration. The individuals involved in testing on an agile development are on a constant voyage of discovery to understand the behaviours and risks that are being created or uncovered on a daily basis.
It has always been the case that the testing activities of
individuals and teams can be viewed at different levels
of coordination within the overall testing effort, such as
projects, requirements and test phases. In Agile things
are no different, however the levels are less rigid than
the test phases of a more formally structured
development project. Most Agile developments will be
organised using a loosely related structure of artefacts
to allow co-ordination of development activities. Each
artefact will have corresponding testing responsibilities,
for example, to coordinate testing activities we may consider:
- a testing department working on a product
- the testers in a development team working on an epic or feature
- an individual working on a user story or bug
These hierarchies are typically not rigid - stories and bugs may exist outside of the scope of epics or features, for example. Nevertheless the higher level activities will usually comprise a number of smaller scoped testing activities against lower level artefacts.
A different way of looking at this is that any single testing activity may concurrently be at all of these layers at once, the appropriate one dependent on the level at which it is being considered. A tester working on a story could be an active part of a team developing an epic, who are themselves a part of a department working on an entire product development.
When discussing this idea with David Evans (of Neuri Consulting) he recounted to me a highly appropriate parable of three stone-cutters [2]:
An old story tells of three stonecutters who were asked what they were doing. The first replied, "I am making a living." The second kept on hammering while he said, "I am doing the best job of stonecutting in the entire country." The third one looked up with a visionary gleam in his eyes and said, "I am building a cathedral."
In order to define an appropriate testing approach, those responsible for testing need to understand at any point in time what their responsibilities are towards testing both the low level items they are working on, such as a user story, and also the higher level features and products that these are part of.
For me this is a very real risk in Agile teams. The narrow focus on the testing of thinly scoped, low-level user stories can lead to emergent, performance or architectural risks being overlooked simply because they weren't obviously part of a specific user story.
Whereas these activities might historically have been pre-defined as hierarchical elements of the testing strategy or plan, in Agile development a more appropriate approach would be to consider these as elements of a testing mission at the relevant level. The teams and individuals involved in the testing may not know specifically which products or features they will be testing in advance; instead the parameters of this mission will emerge and evolve as the product develops.
In exploratory testing approaches, the idea of a testing charter is a commonly used concept to scope a testing activity which is not predetermined but has clear scope and focus. In her excellent writing on Exploratory Testing [3], Elisabeth Hendrickson describes the characteristics of an exploratory charter as fitting the following pattern:
Explore an Area
With Resources
to discover Some Information
This structure was introduced to provide a useful means for describing low level testing charters when performing exploratory testing. I personally find that it is also a valid exercise to apply this pattern to define and distinguish higher levels of testing, such as the testing missions of teams or departments working against the wider scoped testing artefacts.
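For example, a hypothetical team-level mission written in this pattern might read:
Explore the reporting feature as a whole
With production-like data volumes and concurrent users
to discover the performance and concurrency risks that individual user stories will not expose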

Fig1. Test Activity Considered at Different Layers


At each level those responsible for delivering the testing will have a defined scope or area they are testing, they will need established individuals and resources allocated to do this, and they should have an understanding of the types of problems that they are looking for. These characteristics need to be validly identifiable at every level at which we might need to coordinate testing activities if we are to avoid teams missing important areas of testing.
Importantly here, the testing should not be limited to
the functionality delivered in the user stories. As
mentioned earlier, there is a real risk in Agile
development that the only testing that is performed
relates to thinly sliced user stories. Those responsible for
testing any product or feature need to consider other
characteristics and risks that merit testing attention,
such as performance and 'ility' testing (usability,
stability, scalability, security), architectural risks and
operating system interactions that may more
appropriately be targeted against the higher level
artefacts such as epics, features and products.
Another characteristic of these testing missions is that, whilst it might be possible to identify many of them in advance of a development, many of the missions will be generated through the course of the development process. Bugs are found and risks uncovered that merit testing attention, and the testing of these needs to be included under the scope of responsibilities of those involved in testing the product.
Fig2. Test Activity Layers with Emergent Activities
Not every piece of work in an Agile process needs to be a user story, and testing tasks, whether planned or discovered, may need to be included in a team's backlog if the scope of the testing 'mission' is to be covered.
I find that looking at testing in this way provides a useful perspective on the nature of testing in Agile developments. It provides a lens through which the testing activity on an iterative development may be viewed. In itself it is not a test strategy, but this kind of perspective does help in understanding the nature of testing in Agile so that an appropriate approach to a test strategy might be defined.
The Living Test Strategy
In 2013 I wrote an article for a friend of mine, Rob Lambert, on the concept of a "Square-Shaped Team", which seemed to resonate well with a lot of testers [4]. The concept of a T-shaped individual was one that I'd encountered a number of times, as a model for defining someone with a solid set of broad skills combined with a subset of core skills at which they excel. The idea of the
Square-Shaped team came out of my belief that the
teams most capable of achieving a goal are those made
up of a group of individuals with complementary skills
that can combine to achieve the goals of the team. This
concept applies particularly well to testing, and I believe that the teams most capable of testing a product are not constructed from a group of similarly trained carbon copy testers, but instead a range of individuals with skills and knowledge to assess the product in a range of ways.
Inevitably the appropriate skills for testing will depend heavily on the software and the context in which it is distributed and used. We cannot expect to deliver a comprehensive testing effort if those responsible for testing don't possess the range of skills required to perform an appropriate variety of testing activities. We may require coding knowledge to develop scripts and tools, or possibly database knowledge to examine database performance, or OS skills to assess the impact the software has on the environment, or possibly all of these things and more besides.
The identification of relevant skills, and allocation of
individuals responsible for testing across teams is therefore a key element in testing strategy. Just as the strategy of a sports manager is embodied in the team that he or she puts on the pitch, so a test strategy is embodied in the individuals that we place in the testing roles in our organisation. The individuals that we hire and train to deliver the testing in our Agile organisations form a "living test strategy" that represents our approach to software testing.
An Understanding Of Responsibility
Having the right skilled individuals unfortunately does
not guarantee success. As any sports fan will know, just
because you populate a group with talented individuals
does not mean that they will be successful as a team. A
crucial requirement to success is that each team and
individual has a clear understanding of their
responsibilities and the responsibilities of others. If this
is not the case then there is a real risk of either
duplication of effort, which is wasteful, or, worse, the missing of important areas of testing.
It is not necessarily the case that any or all of these responsibilities fall to devoted testers. It may be that the
software context merits a high level of devoted testers,
or with simpler products it may be possible for the
software developers to undertake some or all of the
testing. Whatever the approach, the hiring of
individuals and allocation of testing responsibilities
needs to be done with an understanding of their part in
the testing strategy, such that the strategy is embodied
and understood throughout the organisation.
Achieving this understanding is no trivial undertaking
though. As we've established, the nature of Agile
developments is constantly changing. If we are too
prescriptive around the allocation of responsibility then
we run the risk of emergent needs not being addressed
as they don't fall within the scope of responsibilities that
we have defined.
This is where the multi-layered view of testing activity
that I have defined above can help us to identify
responsibilities at the appropriate levels. Our Agile test
strategy must not only ensure that the responsibility of
each person testing a user story is understood, but also
that there is a clear allocation of responsibility at each of
the higher levels that we wish to co-ordinate testing,
whether to an individual or a team. At each of the
product, feature, epic and story levels the individual or
group responsible for coordinating the testing must
have a clear understanding of the scope of their testing mission in relation to that level. They must understand the feature area to cover, the resources at their disposal, and the type of issues that they are expected to look for.
Conveying the understanding of expectations is an important role for those responsible for test strategy. I have found that working together as a testing group to establish a set of principles is an effective way of establishing a common understanding of responsibility in the testers across an organisation. A set of principles established early, and revisited through regular face-to-face testing-oriented discussion, either as a whole testing team or through specialist interest groups, is a very effective way of establishing and maintaining an understanding of responsibilities.
Facing New Challenges
As important as structuring the testing approach for the
current demands at any point in time is the need to
build autonomy into those responsible for testing to
meet new challenges as they arise. This is achieved by
ensuring that we construct a testing organisation that
has flexibility in scope of work, but strength in skill and
self-organisation that allows appropriate response
when new testing demands emerge. They must possess
the skills to pick up new testing needs that arise through
the testing of each piece and have the autonomy to
develop skills and bring in tools as they need them.
Sometimes it will be the case that the needs cannot be
met within the existing teams, and that new skills must
be brought into the organisation through external
recruitment, consultancy or training. Timing is critical
here as priorities change fast. Consequently test strategy
is not just about tackling the known challenges of the
current developments and backlogs, but also
maintaining awareness of possible future developments
and aiming to prepare the team for those in advance of
them hitting the sprint backlog or "Todo" column. The
test strategist needs to maintain visibility of Product
Roadmaps and co-ordinate research and development
of skills and technologies ahead of time.
An Understanding Of Risk
An important element of test strategy is to establish parameters around the testing to provide some consistency in the level of testing performed. These parameters will include:
- the number of people devoted to testing
- an expectation of the time devoted to testing in each sprint or story
- an expectation around the various levels of automated checking
- an expectation around the levels of testing such as accessibility, performance, security and disaster recovery
Inherent in these established parameters is an accepted
level of risk that the company is prepared to take in the
development. This risk will depend on many factors including the nature and purpose of the software, the relative costs and profit of the business, and the risk appetite of the business decision makers. Most testers are well aware that life-critical software has to be developed at a significantly lower level of risk than social media apps, yet in many cases the level of risk acceptable in Agile software testing is implied rather than agreed.
Establishing the acceptable level of risk in each development can be something of an iterative process, learning from feedback that development has 'taken too long', or that the cost of not testing was too high. One thing that is essential to establishing an understanding of risk is for teams, during reviews such as sprint reviews, to provide the Product Owner with an appraisal of the testing performed and the corresponding risks with each development.
One common problem of the tester's existence is that the management will have an expectation of a low number of bugs in the software, irrespective of the level of risk that they are taking during development. The conversations around acceptable risk can be difficult in this situation. A useful technique that I have used in the past to drive the conversation is to create a 'testability map' and discuss this with the business decision makers. I've used a mind map, though other visual representations could work as well; the important thing here is the ability to represent the levels of testability around areas of a system, and the consequential risk if the testability is low. Testability in this regard need not be limited to intrinsic testability in the product itself, but can also consider the capabilities of the team to test certain characteristics, such as technology, skill or simply time.

Fig 3. A Testability Map to discuss Risk


The conversation around acceptable risk is a
challenging one, but a critical one if an appropriate
strategy around establishing an expected level of testing
is to be implemented.
Preparation Over Planning
I believe that test strategy is as important in an Agile
context as it has been in any other. The Agile test
strategist is not looking to long term planning though.
Co-ordinating testing activities in Agile software
development is no longer about planning, because
things change too quickly for such plans to be useful.
Instead test strategy is about preparation. All of the
approaches and considerations I've discussed here
relate not to prescribing the activities that are to be
performed, but to preparing teams and individuals to
implement a consistent and well understood testing
approach irrespective of the work that is prioritised.
A constantly changing canvas of features need not result in an absence of test strategy. Through:
- understanding the different levels of coordination of testing activities in an Agile process
- populating teams with individuals possessing a complementary range of testing skills
- establishing an understanding of testability and acceptable risk with the business decision makers
- giving testers and teams an understanding of their responsibilities relevant to the levels of testing coordination and acceptable risk
- maintaining visibility of potential future needs and preparing skills and tools in expectation of these
it is possible to embed a consistent and coherent strategy into the ongoing operation of the teams in an organisation, such that the delivery of the strategy does not require referencing any documentation, but is fundamental to the structure and processes instilled in the very fabric of the team.

References
1. 2016 State of Testing Survey - https://www.practitest.com/resources/state-of-testing-2016/
2. Parable of the stonecutters - https://staroversky.com/blog/the-parable-of-the-three-stonecutters
3. Exploratory Testing in an Agile Context, Elisabeth Hendrickson - http://www.qualitytree.com/ebooks/et.pdf
4. T-Shaped Tester, Square-Shaped Team, Adam Knight - http://thesocialtester.co.uk/t-shaped-tester-square-shaped-team/

About the Author

Adam Knight has been involved in software development for 20 years. He is passionate about software testing and an active contributor to the testing community. As well as writing an active blog, he is a regular speaker at testing events including Agile Testing Days, UKTMF, Next Generation Testing, testing meetups and EuroSTAR. Adam spent the last 9 years working as Director of Testing and Support for RainStor, where his approach based on exploratory testing backed by example-driven automation was used as a case study in Gojko Adzic's award-winning book "Specification by Example". An expert in data systems, Adam has been involved in the testing of two analytical database products and has a wealth of experience leading Agile software developments. He is currently enjoying working as an Agile consultant, using his experience in testing, support and product ownership to help organisations improve the quality of their software from inception to delivery through his company Knight Agile Consulting Ltd. He hasn't ruled out going back to permanent employment. www.a-sisyphean-task.com
Adam tweets at @adampknight



#TestRelatedAccounts
EuroSTAR Conferences
Official Twitter of EuroSTAR Software Testing Community.
Europe's #1 software testing network.
https://twitter.com/esconfs

SoftwareTestPro
Serving the global software testing & quality assurance community, providing information, education & professional networking.
https://twitter.com/SoftwareTestPro

SoftwareTestingClub
A professional software testing community. Come join us!

https://twitter.com/testingclub

Let'sTest Conference
A Kick-Ass Conference on Context-Driven Testing - for Testers, by
Testers

https://twitter.com/LetsTest_Conf




Tutorial 4

Automation with
Selenium
- Learn Selenium with Mohit Verma
Automating Radio Buttons using Selenium Webdriver
In the last tutorial we learned how to automate dropdowns using Selenium WebDriver. Continuing our Web UI automation series, this tutorial will explain how to handle radio buttons in a web application.
For practice purposes, we have created a simple webpage with 3 radio buttons, which displays as follows:

The following code can be used to create the webpage shown above; save it as RadioButton.html.
<!DOCTYPE html>
<html>
<body>
<form>
<input type="radio" name="gender" value="Male" checked="Checked"> Male<br>
<input type="radio" name="gender" value="Female"> Female<br>
<input type="radio" name="gender" value="Other"> Other
</form>
</body>
</html>
Scenario to be automated:
1. Launch the Firefox browser and open the RadioButton.html page


2. Select the "Female" radio button from the list
To automate the above scenario we would need to perform the following actions:
1. Open the RadioButton.html page in the browser
2. Locate the "Female" radio button on the page
3. Click the "Female" radio button to select it
In this example, we are locating the "Female" radio button using the XPath technique. As we saw in our 2nd tutorial, Firebug and Firepath can be used to locate the web element.

Code Snippet:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class RadioBtn1 {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        String url = "C:\\Users\\Mohit\\Desktop\\Selenium\\Article 4\\RadioButton.html";
        //Open RadioButton Html page
        driver.get(url);
        //Locate and select "Female" radio button
        driver.findElement(By.xpath("html/body/form/input[2]")).click();
    }
}


Selecting the Radio Button by Value: Imagine a scenario where the XPath for the radio button changes with every new build and you can't use it for automation. In such a case, you would want to locate the radio button by an attribute with a consistent value, such as the value attribute of the radio button. In the following code snippet, we select the "Female" radio button by value.
Code Snippet:
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class RadioBtn3 {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        String url = "C:\\Users\\Mohit\\Desktop\\Selenium\\Article 4\\RadioButton.html";
        //Open the URL
        driver.get(url);
        //List all radio buttons of gender group
        List<WebElement> rdobtn = driver.findElements(By.name("gender"));
        //Select radio button by value "Female"
        for (WebElement radio : rdobtn) {
            String strValue = radio.getAttribute("value");
            if (strValue.equals("Female"))
                radio.click();
        }
    }
}

In a few scenarios, you might need to find the value of the selected radio button. The following scenario prints the value of the selected option.
Scenario to be automated:
1. Launch the Firefox browser and open the RadioButton page
2. Select the "Other" option
3. Print the value of the selected option
4. Close the browser
To automate the above scenario we would need to perform the following actions:


1. Open the RadioButton.html page
2. Locate the "Other" radio button on the page
3. Click the "Other" radio button to select it
4. Traverse the radio button list to find the selected radio button
5. Print the value of the selected radio button
6. Close the browser
In this example, we have selected the radio button based on its index.
Code Snippet:
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class RadioBtn2 {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        String url = "C:\\Users\\Mohit\\Desktop\\Selenium\\Article 4\\RadioButton.html";
        //Open the URL
        driver.get(url);
        //Select "Other" radio button
        List<WebElement> rdobtn = driver.findElements(By.name("gender"));
        rdobtn.get(2).click();
        //Print the value of selected option
        for (int i = 0; i < rdobtn.size(); i++) {
            if (rdobtn.get(i).isSelected()) {
                System.out.println(rdobtn.get(i).getAttribute("value"));
            }
        }
        //Close the Web Browser
        driver.close();
        //Terminate the Program
        System.exit(0);
    }
}


Summary: Radio buttons are commonly used in applications and they are simple to automate. This tutorial explained how radio buttons can be automated using Selenium WebDriver. We have covered the most common scenarios associated with the usage of radio buttons; readers should practise automating the other scenarios, such as the one sketched below.
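As an additional practice exercise, here is a minimal sketch along the same lines (reusing the RadioButton.html page above) that uses the isSelected() method to verify which radio button is selected by default:

import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class RadioBtnDefault {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        String url = "C:\\Users\\Mohit\\Desktop\\Selenium\\Article 4\\RadioButton.html";
        //Open RadioButton Html page
        driver.get(url);
        //"Male" is marked checked="Checked" in the HTML, so it should be reported here
        List<WebElement> rdobtn = driver.findElements(By.name("gender"));
        for (WebElement radio : rdobtn) {
            if (radio.isSelected()) {
                System.out.println("Default selection: " + radio.getAttribute("value"));
            }
        }
        //Close the Web Browser
        driver.close();
    }
}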

About the Author

Mohit Verma has 8 years of experience in software testing. He has experience testing projects in insurance, health, retail, social media, ERP and many other domains, including Fortune 500 clients and government agencies.
Mohit holds a Master's degree in computer applications. He is currently working as a Project Lead with SPAN Infotech (EVRY India), Bangalore. He can be contacted at mohitrajanverma@gmail.com or followed on Twitter at @MohitRajanVerma.


Is Performance Testing just a Testing?
- Alexander Podelko
Performance testing is an interesting area based on two larger and rather independent disciplines: performance analysis and testing. The statement sounds rather trivial - it can be easily induced from the name - but it actually has important consequences that are often not well understood and sometimes lead to serious issues if not factored in. The performance component requires such a different set of skills and knowledge, maybe even a somewhat different way of thinking, that it is rather misleading to approach performance testing just as a variation of testing.
Let us first define terminology, as it is rather vague. There are different (and sometimes conflicting) definitions of almost every term here. I'd suggest considering every testing involving measurement of performance in any way to be performance testing, and every testing requiring application of a multi-user, synthetic load to be load testing. Many different names may be used for multi-user testing (such as concurrency, stress, scalability, endurance, longevity, soak, stability, or reliability testing) depending on test goals and design, but in most cases they use the same approach: applying multi-user, synthetic workloads to the system. So the term "load testing" is used here for all multi-user performance testing, which is the main subject of this article and differs most from functional testing (compared, for example, with single-user performance testing).
This article discusses load testing and highlights points
that are often missed by people moving into
performance testing from functional testing or
development. Applying the best practices and metrics of
functional testing to load testing quite often results in
disappointments, unrealistic expectations, sub-optimal
test planning and design, and misleading results.
I got interested in the topic attending the first Workshop on Performance and Reliability (WOPR) back in 2003 and have been elaborating on it since, using periodic opportunities to discuss the subject with experienced testers. The points discussed here are not absolutes; they may still be found in functional testing in one way or another, but with a rather different focus and emphasis. While people who have been involved in load testing or performance analysis may find many of the statements below trivial, it is still surprising how often these points are missed.
Load Testing Process Overview
A typical load testing process is shown in figure 1. It is, of course, rather simplified and probably requires updates to adjust to modern industry trends - but that is a subject for another discussion, and for this article we consider the traditional view.
Two different steps are explicitly specified here: define load and create test assets. The define load step (sometimes referred to as workload characterization or workload modeling) is a logical description of the load we want to apply (like "that group of users logs in, navigates to a random item in the catalog, adds it to the shopping cart, pays, and logs out, with an average 10 seconds think time between actions"). The create test assets step is the implementation of this workload, the conversion of the logical description into something that will physically create that load during the run tests step. While for manual testing that can be just the description given to each tester, in load testing it is usually something else - a program or a script.
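As a rough sketch of what the create test assets step can produce for that logical description (an illustration of my own, not tied to any particular load testing tool; the class and method names are hypothetical), consider a simple multi-threaded Java harness:

import java.util.Random;

// One virtual shopper implementing the logical workload description
public class ShopperScenario implements Runnable {

    public void run() {
        step("login");
        step("view catalog item " + new Random().nextInt(1000)); // a random item
        step("add item to cart");
        step("pay");
        step("logout");
    }

    // Placeholder for a real request, followed by the 10-second think time
    private void step(String action) {
        System.out.println(Thread.currentThread().getName() + ": " + action);
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        // Apply a 5-user synthetic load; a real tool would manage hundreds of these
        for (int i = 0; i < 5; i++) {
            new Thread(new ShopperScenario(), "user-" + i).start();
        }
    }
}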
Quite often load testing goes hand-in-hand with tuning, diagnostics, and capacity planning. This is actually represented by the loop back in fig. 1: if we don't meet our goal, we need to optimize the system to improve performance. Usually the load testing process implies tuning and modification of the system to achieve the goals.
Load testing is not a one-time procedure. It spans the whole system development life cycle. It may start from technology or prototype scalability evaluation, continue through component / unit performance testing into system performance testing before deployment, and follow up in production (to troubleshoot performance issues and test upgrades / load increases).
What to Test
Even in functional testing we have a potentially
unlimited number of test cases and the art of testing is
to choose a limited set of test cases that should check the
product functionality in the best way with given
resource limitations. It is much worse with load testing.
Each user can follow a different scenario (a sequence of
functional steps), and even the sequence of steps of one
user versus the steps of another user could affect the
results significantly.
Load testing can't be comprehensive. Several scenarios (use cases, test cases) should be chosen. Usually they are the most typical scenarios, the ones that most users are likely to follow. It is a good idea to identify several classes of users - for example, administrators, operators,
users, or analysts. It is simpler to identify typical
scenarios for a particular class of users. With that
approach, rare use cases are ignored. For example,
many administrator-type activities can be omitted as far
as there are few of them compared with other activities.
Another important criterion is risk. If a rare activity
presents a major inherent risk, it can be a good idea to
add it to the scenarios to test. For example, if database
backups can significantly affect performance and

should be done in parallel with regular work, it makes


sense to include a backup scenario in performance
testing.
Code coverage usually doesn't make much sense in load testing. It is important to know what parts of code are being processed in parallel by different users, not that a particular code path was executed. Perhaps it is possible to speak about component coverage, making sure that all important components of the system are involved in performance testing. For example, if different components are responsible for printing HTML and PDF reports, it is a good idea to add both kinds of printing to testing scenarios.
Requirements
In addition to functional requirements (which are still
valid for performance testing - the system still should do
everything it is designed to do under load), there are
other classes of requirements:
- Response times: how fast the system handles individual requests, or what a real user would experience
- Throughput: how many requests the system can handle
- Concurrency: how many users or threads can work simultaneously
All of them are important. Good throughput with long response times often is as unacceptable as good response times for just a few users.
Acceptable response times should be defined in each particular case. A response time of 30 minutes may be excellent for a big batch job, but it is absolutely unacceptable for accessing a Web page in an online store. Although it is often difficult to draw the line here, it is rather a usability or common-sense decision. Keep in mind that for multi-user testing we get multiple response times for each transaction, so we need to use some aggregate values like averages or percentiles (for example, 90% of response times are less than this value).
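As a minimal illustration of such aggregation (a sketch of my own, using the simple nearest-rank definition of a percentile), the following Java snippet computes the average and the 90th percentile from a set of measured response times:

import java.util.Arrays;

public class ResponseTimeStats {
    public static void main(String[] args) {
        // Hypothetical response times for one transaction, in milliseconds
        double[] times = {120, 150, 180, 200, 220, 250, 300, 450, 900, 1500};
        Arrays.sort(times);

        double sum = 0;
        for (double t : times) {
            sum += t;
        }
        double average = sum / times.length;

        // Nearest-rank 90th percentile: 90% of the response times do not exceed it
        int index = (int) Math.ceil(0.9 * times.length) - 1;
        double p90 = times[index];

        System.out.println("Average: " + average + " ms, 90th percentile: " + p90 + " ms");
    }
}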
Throughput defines the load on the system. Unfortunately, quite often the number of users (concurrency) is used to define the load for interactive systems instead of throughput - partially because that number is often easier to find, partially because it is the way load testing tools define load. Without defining what each user is doing and how intensely (i.e. the throughput for one user), the number of users is not a good measure of load. For example, if there are 500 users running short queries each minute, we have a throughput of 30,000 queries per hour. If the same 500 users run the same queries, but one per hour, the throughput is 500 queries per hour. So with the same 500 users we have a 60-fold difference between loads and, probably, at least a 60-fold difference in the hardware needed.
The intensity of load can be controlled by adding delays (often referred to as "think time") between actions in scripts or harness code. So one approach is to start with the total throughput the system should handle, then find the number of concurrent users, get the number of transactions per user for the test, and then set think times to ensure the proper number of transactions per user.
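To make that arithmetic concrete, here is a small sketch of my own (the numbers are assumptions for illustration) that derives the think time from a throughput target using Little's Law, N = X * (R + Z), where N is the number of users, X the throughput, R the response time and Z the think time:

public class ThinkTimeCalc {
    public static void main(String[] args) {
        double throughput = 30000.0 / 3600.0; // target: 30,000 queries per hour, in queries/sec
        int users = 500;                      // planned number of concurrent users
        double responseTime = 2.0;            // assumed average response time, in seconds

        // Little's Law: N = X * (R + Z), so Z = N / X - R
        double thinkTime = users / throughput - responseTime;

        // Prints ~58 seconds - roughly one query per user per minute,
        // matching the first example above
        System.out.printf("Think time: %.0f seconds%n", thinkTime);
    }
}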
Finding the number of concurrent users for a new
system can be tricky too. Usually information about real
usage of similar systems can help to make an initial
estimate. Another approach may be to assume what
share of named (registered in the system) users are
active (logged on). So if that share is 10%, 1,000 named
users results in 100 active users. These numbers, of
course, depend greatly on the nature of the system.

Workload Implementation
If we work with a new system and have never run a load test against it before, the first question is how to create load. Are we going to generate it manually, use a load testing tool, or create a test harness?
Manual testing could sometimes work if we want to simulate a small number of users. However, even if it is well organized, manual testing will introduce some variation in each test, making the test difficult to reproduce. Workload implementation using a tool (software or hardware) is quite straightforward when the system has a pure HTML interface, but even if there is an applet on the client side, it may become a serious research task, not to mention dealing with proprietary protocols. Creating a test harness requires more knowledge about the system (for example, an API) and some programming skills. Each choice requires different skills, resources, and investments. Therefore, when starting a new load-testing project, the first thing to do is to decide how the workload will be implemented and to check that this approach will really work. After we decide how to create the workload, we need to find a way to verify that the workload is really being applied.

Workload Verification
Unfortunately, an absence of error messages during a load test does not mean that the system works correctly. An important part of load testing is workload verification. We should be sure that the applied workload is doing what it is supposed to do and that all errors are caught and logged. The problem is that in load testing we work on the protocol or API level and often don't have any visual clues that something doesn't work properly. Workload can be verified directly (by analyzing server responses) or, in cases where this is impossible, indirectly (for example, by analyzing the application log or database for the existence of particular entries).
Many tools provide some ways to verify workload and check errors, but you need to understand what exactly is happening. Many tools report only HTTP errors for Web scripts by default (such as "500 Internal Server Error"). If we rely on the default diagnostics, we could still believe that everything is going well when we are actually getting "out of memory" errors instead of the requested reports. To catch such errors, we should add special commands to our script to check the content of the HTML pages returned by the server.
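As a hypothetical illustration of such a content check (a sketch of my own in plain Java, independent of any load testing tool; the URL and markers are invented), a script could validate the body of the response rather than trusting the HTTP status code alone:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ContentCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical report URL, used only for illustration
        URL url = new URL("http://example.com/reports/sales");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        // An HTTP 200 alone does not prove the report was actually generated
        if (conn.getResponseCode() != 200) {
            System.out.println("HTTP error: " + conn.getResponseCode());
            return;
        }

        // Read the whole response body
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }

        // Check the content itself: the report marker must be present,
        // and no error text should have replaced the expected output
        if (body.indexOf("Sales Report") < 0 || body.indexOf("OutOfMemoryError") >= 0) {
            System.out.println("Content check failed: report not rendered correctly");
        } else {
            System.out.println("Content check passed");
        }
    }
}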

The Effect of Data
The size and structure of data could affect load test results drastically. Using a small sample set of data for performance tests is an easy way to get misleading results. It is very difficult to predict how much the data size affects performance before real testing. The closer the test data is to production data, the more reliable the test results are.
Running multiple users hitting the same set of data (for example, playback of an automatically created script without proper modifications) is an easy way to get misleading results. This data could be completely cached, and we will get much better results than in production. Or it could cause concurrency issues, and we will get much worse results than in production. So scripts and test harnesses usually should be parameterized (fixed or recorded data should be replaced with values from a list of possible choices) so that each user uses a proper set of data. The term "proper" here means different enough to avoid problems with caching and concurrency, which is specific to the system, data, and test requirements.
Another easy trap with data is adding new data during the tests without sufficient consideration. Each new test will create additional data, so each test would be done with a different amount of data. One way of running such tests is to restore the system to the original state after each test or group of tests. Or additional tests can be performed to prove that a change of data volume inside a specific range does not change the outcome of that particular test.
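A minimal sketch of the parameterization mentioned above (my own illustration; the account names are invented) assigns each virtual user its own value from a data pool instead of replaying the same recorded value:

import java.util.Arrays;
import java.util.List;

public class ParameterizedData {
    // Pool of test accounts; ideally sized and distributed like production data
    private static final List<String> ACCOUNTS =
            Arrays.asList("user001", "user002", "user003", "user004");

    // Give each virtual user its own account so requests do not all hit
    // the same (possibly cached or locked) data
    static String accountFor(int virtualUserId) {
        return ACCOUNTS.get(virtualUserId % ACCOUNTS.size());
    }

    public static void main(String[] args) {
        for (int vu = 0; vu < 4; vu++) {
            System.out.println("Virtual user " + vu + " logs in as " + accountFor(vu));
        }
    }
}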
Exploring the System
At the beginning of a new project, it is a good practice to
run some tests to figure out how the system behaves
before creating formal plans. If no performance tests
have been run, there is no way to predict how many
users the system can support and how each scenario
will affect overall performance. Modeling can help here
to find the potential limits, but a bug in the code or an
environmental issue can dwarf scalability.
It is good to check that we do not have any functional
problems. Is it possible to run all requested scenarios
manually? Are there any performance issues with just
one or with several users? Are there enough computer
resources to support the requested scenarios? If we have
a functional or performance problem with one user,
usually it should be fixed before starting performance
testing with that scenario.
Even if there are extensive plans for performance
testing, an iterative approach will fit better here. As soon
as a new script is ready, run it. This will help to
understand how well the system can handle a specific
load. The results we get can help to improve plans and
discover many issues early. By running tests we are
learning the system and may find out that the original
ideas about the system were not completely correct. A
waterfall approach, when all scripts are created before
running any multi-user test, is dangerous here: issues
would be discovered later and a lot of work may need
to be redone.
Assumed Activities
Usually when people talk about performance testing, they do not separate it from tuning, diagnostics, or capacity planning. "Pure" performance testing is possible only in rare cases when the system and all optimal settings are well known. Usually some tuning activities are necessary at the beginning of testing to be sure that the system is properly tuned and the results will be meaningful. In most cases, if a performance or reliability problem is found, it should be diagnosed further until it becomes clear how to handle it. Generally speaking, performance testing, tuning, diagnostics, and capacity planning are quite different processes, and excluding any of them from the test plan, if they are assumed, will make the plan unrealistic from the beginning.
Test Environment
Conducting functional testing in virtualized and cloud environments is quite typical and has many advantages. While many companies promote load testing in the cloud (or from the cloud), it makes sense only for certain types of load testing. For example, it should work fine if we want to test how many users the system supports, whether it would crash under a load of X users, how many servers we need to support Y users, etc., but we are not too concerned with exact numbers or variability of results (or even want to see some real-life variability). However, it doesn't quite work for performance optimization, when we make a change in the system and want to see how it impacts performance. Testing in a virtualized or cloud environment with other tenants intrinsically has some variability of results, as we don't control other activities and, in the cloud, usually don't even know the exact hardware configuration.
So when we talk about performance optimization, we still need an isolated lab in most cases. And, if the target environment for the system is a cloud, it probably should be an isolated private cloud with hardware and software infrastructure similar to the target cloud. And we need monitoring access to the underlying hardware to see how the system maps to the hardware resources and whether it works as expected (for example, testing scaling out or evaluating impacts to/from other tenants - which probably should be one more kind of performance testing to do).
Time Considerations
Performance tests usually take more time than functional tests. Usually we are interested in the steady mode during load testing. That means all users need to log in and work for some time to be sure that we see a stable pattern of performance and resource utilization. Measuring performance during transition periods can be misleading. The more users we simulate, the more time we will usually need to get into the steady mode. Moreover, some kinds of testing (reliability, for example) can require a significant amount of time - from several hours to several days or even weeks. Therefore, the number of tests that can be run per day is limited. It is especially important to consider this during tuning or diagnostics, when the number of iterations is unknown and can be large. It is even more important if we want to include performance testing in continuous integration, as a good trade-off between the number of tests to include and the time to deliver is rather difficult to achieve.
Simulating real users requires time, especially if it isn't just repeating actions like entering orders, but a process where some actions follow others. We can't just squeeze several days of regular work into fifteen minutes for each user. This would not be an accurate representation of the real load. It should be a "slice" of work, not a "squeeze". In some cases we can make the load from each user more intensive and respectively decrease the number of users to keep the total volume of work (the throughput) the same. For example, we can simulate 100 users running a small report every five minutes instead of 300 users running that report every fifteen minutes. In this case, we can speak about the ratio of simulated users to real users (1:3 for that example). This is especially useful when we need to perform a lot of tests during the tuning of the system, or when trying to diagnose a problem, to see the results of changes quickly. Quite often that approach is used when there are license limitations.
Still, "squeezing" should only be used in addition to full-scale simulation, not instead of it. Each user consumes additional resources for connections, threads, caches, etc. The exact impact depends on the system implementation, so simulation of 100 users running a small report every ten minutes doesn't guarantee that the system will support 600 users running that report every hour. Moreover, tuning for 600 users may differ significantly from tuning for 100 users. The higher the ratio between simulated and real users, the greater the need to run a test with all users to be sure that the system supports that number of users and that the system is properly tuned.
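A quick arithmetic check of the ratio idea (my own illustration of the numbers used above): the total throughput stays the same while the user count and per-user intensity change.

public class RatioCheck {
    public static void main(String[] args) {
        // 300 real users, each running one report every 15 minutes
        double realLoad = 300 / 15.0;
        // 100 simulated users, each running one report every 5 minutes
        double simLoad = 100 / 5.0;

        // Both print 20.0 reports per minute - a 1:3 ratio of simulated to real users
        System.out.println("Real load:      " + realLoad + " reports/min");
        System.out.println("Simulated load: " + simLoad + " reports/min");
    }
}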
Testing Process
Three specific characteristics of load testing affect the
testing process and often require closer work with
development to fix problems than functional testing
does. First, a reliability or performance problem often
blocks further performance testing until the problem is
fixed or a workaround is found. Second, usually the full
setup, which often is very sophisticated, should be used
to reproduce the problem. However, keeping the full
setup for a long time can be expensive or even
impossible. Third, debugging performance problems is
a sophisticated diagnostic process that usually requires close collaboration between a performance engineer running tests and analyzing the results and a developer profiling and altering code. Special tools may be needed: many tools, such as regular debuggers and profilers, work fine in a single-user environment, but do not work in the multi-user environment due to huge performance overheads.
These three characteristics make it difficult to use an
asynchronous process in load testing (which is often
used in functional testing: testers look for bugs and log
them into a defect tracking system, and then the defects
are prioritized and independently fixed by
development). What is often required is the
synchronized work of performance engineering and
development to fix the problems and complete
performance testing.
A Systematic Approach to Changes
The tuning and diagnostic processes consist of making changes in the system and evaluating their impact on performance (or on problems). It is very important to take a systematic approach to these changes. This could be, for example, the traditional approach of "one change at a time" (sometimes referred to as one factor at a time, or OFAT), or using design of experiments (DOE) theory. "One change at a time" here does not imply changing only one variable; it can mean changing several related variables to check a particular hypothesis.
When I first wrote this, I just wrote "one change at a time". I was lucky to get James Bach to look at one of the early versions of this article. To my surprise, the only point he objected to was this one. I guess I understand why now - many functional testing techniques are based on the idea of decreasing the number of test cases by changing many factors at once in a certain way. But it is not so straightforward in performance testing, so this point underlines a very important difference between performance and functional testing. I have relaxed the wording here - "one change at a time" is too restrictive if understood literally - but it is a very important point to understand.
The relationship between changes in system parameters and changes in product behavior is usually quite complex. Any assumption based on common sense could be wrong; a system's reaction under heavy load can differ drastically from what was expected. So changing several things at once without a systematic approach will not give an understanding of how each change affects the results. This can mess up the testing process and lead to incorrect conclusions. All changes and their impacts should be logged to allow rollback and further analysis.
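As one illustration, the change log can be as simple as one structured record per hypothesis. In the Python sketch below, all field names, settings, and numbers are invented for illustration and not taken from any particular tool; the point is that each record captures everything changed in one step together with its measured impact.

import datetime
import json

change_log = []

def record_change(hypothesis, settings, p90_before_ms, p90_after_ms):
    # Append one tuning step: what was changed, why, and its effect.
    change_log.append({
        "when": datetime.datetime.now().isoformat(),
        "hypothesis": hypothesis,
        "settings": settings,  # every variable changed in this step
        "p90_before_ms": p90_before_ms,
        "p90_after_ms": p90_after_ms,
    })

# One "change" may legitimately touch several related variables:
record_change(
    hypothesis="connection pool is the bottleneck",
    settings={"pool.max": 200, "pool.min": 50, "pool.timeout_s": 30},
    p90_before_ms=2400,
    p90_after_ms=900,
)

# The accumulated log is what makes rollback and later analysis possible:
print(json.dumps(change_log, indent=2))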
Result Analysis
Load testing results carry much more information than just passed/failed. Even if we do not need to tune the system or diagnose a problem, we usually should consider not only transaction response times for all the different transactions (usually using aggregating metrics such as average response times or percentiles), but also other metrics such as resource utilization. The systems used to log functional testing results are usually not well suited to capturing all this information related to load testing results.
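For instance, here is a minimal Python sketch of those aggregating metrics, using made-up sample data, and of why a percentile often tells more than an average.

import statistics

response_times_ms = [120, 135, 150, 160, 180, 210, 240, 300, 450, 1200]

avg = statistics.mean(response_times_ms)
p90 = statistics.quantiles(response_times_ms, n=10)[-1]  # 90th percentile

print(f"average: {avg:.0f} ms, 90th percentile: {p90:.0f} ms")
# The single slow outlier (1200 ms) is diluted in the average but
# dominates the high percentile, which is why percentiles are usually
# preferred when stating response time requirements.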
Result analysis of load testing for enterprise-level systems can be quite difficult and should be based on a good knowledge of the system and its performance requirements, and it should involve all possible sources of information: measured metrics, results of monitoring during the test, all available logs, and profiling results (if available). We need information not only for all components of the system under test, but also for the load generation environment. For example, a heavy load on the load generator machines can completely skew results, and the only way to know that is to monitor those machines.
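A minimal sketch of that kind of self-check, using the third-party psutil package on a load generator machine; the 80% thresholds are an illustrative assumption, not a standard.

import psutil  # third-party: pip install psutil

cpu = psutil.cpu_percent(interval=5)    # average CPU over 5 seconds
mem = psutil.virtual_memory().percent   # RAM currently in use, percent

if cpu > 80 or mem > 80:
    print(f"WARNING: load generator saturated (cpu={cpu}%, mem={mem}%);")
    print("measured response times may be skewed")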
There is always some variation in the results of multi-user tests due to minor differences in the test environment. If the difference is large, it makes sense to analyze why and adjust the tests accordingly - for example, restart the program, or even reboot the system, before each test to eliminate caching effects.
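One simple way to judge whether the variation is acceptable is to compare the relative spread across repeated identical runs, as in this sketch; the run results and the 10% threshold are hypothetical.

import statistics

# p90 response times (ms) from four runs of the same test
runs_p90_ms = [910, 935, 905, 1480]

spread = statistics.stdev(runs_p90_ms) / statistics.mean(runs_p90_ms)
if spread > 0.10:  # more than 10% relative run-to-run spread
    print(f"relative spread {spread:.0%} is too large;")
    print("look for caching effects or background activity and re-run")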
Summary
To wrap up, there are serious differences in processes and required skills between load and functional testing. Some of them were discussed in this article, but they are examples rather than a comprehensive list. It is important to understand that while load and functional testing are both testing, and indeed share a lot of notions and terminology, using the same approaches without considering these differences may result in unrealistic expectations, sub-optimal test planning and design, and misleading results. The word "performance" in "performance testing" does not just identify another kind of testing; it also makes it part of another discipline - performance engineering - with its own vast set of knowledge, skills, and techniques quite different from functional testing.

About the Author


Alex Podelko has specialized in performance since 1997, working as a performance engineer and
architect for several companies. Currently he is a Consulting Member of the Technical Staff at Oracle,
responsible for performance testing and optimization of Enterprise Performance Management and
Business Intelligence (a.k.a. Hyperion) products.
Alex periodically talks and writes about performance-related topics, advocating tearing down silo walls
between different groups of performance professionals. His collection of performance-related links and
documents (including his recent papers and presentations) can be found at www.alexanderpodelko.com.
He blogs at http://alexanderpodelko.com/blog and can be found on Twitter as @apodelko. Alex currently
serves as a director for the Computer Measurement Group (CMG, http://cmg.org), an organization of
performance and capacity planning professionals.
