
Best Practices for Testing in Offshore Agile Projects

by Peter Vaihansky and Anna Obuhova, Aug 08, 2007

At our company, Exigen Services (formerly StarSoft Development Labs), we have been doing offshore XP and Scrum development for over five years. In this time, we have tried various techniques and team configurations, including many approaches to testing. In this short article, we will share some of those practices and methods. Some of them, like a reusable framework for automated testing, came out of many months (and sometimes years) of effort to improve the quality of testing on our projects. Others, like adding negative scenarios and manual testing to the automated test suites, were simply the result of applying common sense, the tester's experience, and intuition to the situation at hand.

Dedicated Testers on Agile Teams

The "classical" XP literature does not say much on the subject of having dedicated testers on XP teams and their particular role. Contrary to what some hardcore XPers believe, our experience shows that XP teams benefit tremendously from having dedicated testing professionals. Experienced testers add value in many ways, including manual exploratory testing, working with the customer to produce more consistent requirements, and exploring new ways to improve the automated testing. Depending on the size of the team, we sometimes have two or three testers on a team. From the accountability standpoint, testers do not answer to the Tech Lead; the Test Lead and Tech Lead on the project are peers. There is a lot of face-to-face communication between testers and developers daily. It is great to be able to simply walk over to another person's desk so he can show you the issue on his monitor.

What Can Be Different in Offshore Projects

When the customer and the team are separated by a distance, certain issues invariably arise. Communication starts taking more time, and you run the risk of creating too much overhead -- exactly what you are trying to avoid by being Agile! Here are some of the best practices we have come up with to help testers contribute to the success of Agile teams.

Stories and story tests are reviewed by testers

At the start of each iteration, we ask our customer to send us the stories and story tests one to two days before the scheduled Planning Game. Our experienced testers use this time to review the stories and story tests to make sure the stories are consistent and testable, and go back to the customer with questions if they find any inconsistencies between the story description and how the customer suggests testing the story. This added step has been tremendously valuable in creating consistent requirements and has helped make our estimates more realistic.

Environment issues are anticipated

It is obvious that you should try to execute tests in the customer's environment whenever possible. This way, the customers get to try the system in their "real world" environment, and the project team gets to learn about the differences between the environments. (No matter how hard we try, it is not always possible to exactly replicate the customer's environment offshore.) However, sometimes for reasons of security, accessibility, or other factors, this is not feasible, and the system that ran perfectly well in our environment breaks when the customer runs it in theirs. What results is a flood of records in the bug tracking system that relate to this particular group of issues. This can easily snowball into a monstrous communication overhead. To mitigate this, as we go through iterations on a project, we generate a list of typical environment-related issues and publish it to the customer. A typical item on the list would look like this: "If you get error message x, then check setting y, and if that is OK, also check setting z." So instead of logging numerous similar issues into the bug tracker (which would demand our response and cause a colossal waste of time), the client can resolve these quickly by using our checklist.

Acceptance testing is comprehensive

On a number of our projects, our customers provide us with stories and story tests. However, the latter contain only positive scenarios. The other problem is that although many changes are typically made during an iteration, story tests are rarely updated by the customer. So our testers make sure that when they produce the automated acceptance tests, they accommodate the changes that were introduced mid-iteration, and also add other tests that their intuition tells them will contribute to quality (e.g., negative scenarios).

When Things Don't Quite Work: Pitfalls in Offshore Agile Testing

Sometimes, a straightforward application of familiar methods or techniques does not work 100% as expected. Here are some of the issues we have run into with testing in our offshore Agile projects, and what we did to resolve them.

Regression test execution takes too long

Sometimes, due to the size of the product, it is not practical to run the entire suite of regression tests because it simply takes too long. In one case, 90 sets of tests (200 KLOC) covering 75% of the web and business functionality took 18 hours to run. Obviously, a workaround was needed. To address this, we created a smoke test (execution time: 30 minutes) that passed if (a) all pages of the web application loaded, and (b) page titles and default buttons were correct. This allowed us to verify only that there were no broken links or pages in the application. To compensate for this limited coverage, we added manual testing of new functionality. The full automated regression suite would still be run once or twice a week. This selective regression testing let some additional defects through, but the overall negative effect was manageable.
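Below is a minimal sketch of what such a smoke test might look like, written in Python for illustration. The base URL, the page-to-title mapping, and the use of the requests and BeautifulSoup libraries are all assumptions for the example; the default-button check is omitted for brevity.

```python
# Minimal smoke-test sketch: verify that every page loads and that its
# title matches an expected value. All URLs and titles are hypothetical.
import requests
from bs4 import BeautifulSoup

BASE_URL = "http://test-server.example.com"  # hypothetical test environment

# Expected page titles keyed by relative URL (illustrative values only).
EXPECTED_TITLES = {
    "/login": "Login",
    "/orders": "Order List",
    "/reports": "Reports",
}

def run_smoke_test() -> bool:
    failures = []
    for path, expected_title in EXPECTED_TITLES.items():
        try:
            response = requests.get(BASE_URL + path, timeout=10)
            response.raise_for_status()  # catches broken links and server errors
            soup = BeautifulSoup(response.text, "html.parser")
            title = (soup.title.string or "").strip() if soup.title else ""
            if title != expected_title:
                failures.append(f"{path}: expected title {expected_title!r}, got {title!r}")
        except requests.RequestException as exc:
            failures.append(f"{path}: failed to load ({exc})")
    for failure in failures:
        print("SMOKE FAILURE:", failure)
    return not failures

if __name__ == "__main__":
    raise SystemExit(0 if run_smoke_test() else 1)
```

A check this shallow finishes in minutes rather than hours, which is exactly the trade-off described above: broad but thin coverage, backed by manual testing of new functionality.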

As another solution to the long execution time problem, we wrote an application that would run tests selectively when a certain module was changed. The system ran the tests for the refactored module and alerted the testers (a minimal sketch of this approach appears at the end of this section). There are several factors that affect execution times. First of all, different tools yield different speeds. For example, Mercury's QuickTest Professional is about 40% quicker than AutomatedQA's TestComplete (for the project in question, we were using version 3), but it is also much more expensive, and the customer doesn't always want to pay the difference. Also, sometimes the tests themselves can be written better and consequently run faster. To facilitate continuous learning, we use pair programming for testers, in which two testers write automated tests together. We have also created coding guidelines for testers to share best testing practices.

Automated testing is not enough

We've had experiences where a client declared that unit testing and automated acceptance testing should be enough, and that they "were not going to pay for manual testing." In theory that may have sounded OK, but in practice problems surfaced quickly. The client wanted daily delivery, and since test automation is not instantaneous, the early deliveries obviously had defects. That generated a flood of issues in the issue tracking system, which in turn generated a huge communication overhead... It was quickly evident to everyone that the project was failing. We had to go back to the client and explain that manual testing could add value and actually save them money rather than waste it. As soon as the client agreed to give it a try and we were "allowed" to do manual testing, we immediately saw a drastic increase in product quality. Now on most projects we are able to convey to the client that manual testing is an invaluable tool for verifying new functionality and for exploratory testing.

Bug tracking generates excessive communication

The first thing to know about a bug tracker is that you should have one. It is not too important where it is hosted or which one it is, as long as both the client and the offshore development team have a clear procedure for how to use it. Problems arise when multiple people on the customer side enter the same bug into the system. Again, this generates communication overhead, as we find ourselves endlessly asking the customer to remove duplicate items from the tracker. So the next thing to do is to work with the customer to establish a single individual on the customer side who will populate the system. Also, sometimes just logging a particular defect into the system is not enough to help the testers understand what exactly the problem is. Agile is powerful because it recognizes the limitations of the written word. What we have done to deal with the ambiguity here is to use desktop sharing tools, whereby the client simply walks us through the sequence of actions that generated an error. A picture is truly worth a thousand words; this technique has saved us vast amounts of time that would otherwise be spent on endless email exchanges or phone calls.
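Returning to the selective test execution mentioned above, here is a minimal sketch of the idea, again in Python for illustration. The mapping from modules to suites, the module and suite names, and the run_suite command are all hypothetical; in practice the list of changed modules might come from version-control change logs.

```python
# Selective test execution sketch: given the modules that changed, run only
# the regression suites that cover them. All names here are hypothetical.
import subprocess
import sys

# Which regression suites cover which modules (illustrative mapping).
MODULE_TO_SUITES = {
    "billing": ["test_invoices", "test_payments"],
    "catalog": ["test_search", "test_product_pages"],
    "auth": ["test_login", "test_permissions"],
}

def suites_for_changes(changed_modules):
    """Collect the suites affected by the given changed modules, no duplicates."""
    selected = []
    for module in changed_modules:
        for suite in MODULE_TO_SUITES.get(module, []):
            if suite not in selected:
                selected.append(suite)
    return selected

def main():
    changed = sys.argv[1:]  # e.g. module names extracted from commit logs
    for suite in suites_for_changes(changed):
        print(f"Running suite {suite} ...")
        # Hypothetical runner invocation; substitute your tool's actual CLI.
        subprocess.run(["run_suite", suite], check=False)

if __name__ == "__main__":
    main()
```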

In-house Automated Test Framework

Depending on the tools you use, you may or may not need an additional framework for automated testing. For example, QuickTest Pro, with its keyword-driven approach, is a powerful tool; however, its price may be prohibitive in some situations. TestComplete is much more affordable, but we found that in order to make it fully usable on our projects, we needed to build a framework around it. A reusable framework also made sense because we were using it on a string of similar projects for the same customer. Here are some of the useful things supported by the framework:

- Executing batches of tests: if a test fails, the tool simply moves on to the next one.
- Checks and asserts.
- Data storage, special methods for working with databases, and error handling.
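As an illustration of the first of these behaviors, here is a minimal Python sketch of a batch runner that records a failure and moves on, together with a simple check helper and error handling. (The actual framework was built inside TestComplete's scripting environment; Python is used here purely for illustration, and the example tests are hypothetical.)

```python
# Batch-execution sketch: run every test in a batch; a failure is logged
# but never stops the batch, mirroring the framework behavior described above.
import traceback

def check_equals(actual, expected, message=""):
    """A simple check helper; raises on mismatch, like the framework's asserts."""
    if actual != expected:
        raise AssertionError(f"{message}: expected {expected!r}, got {actual!r}")

def run_batch(tests):
    """Run each test function; record PASS/FAIL and keep going on failure."""
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "PASS"
        except Exception:
            results[test.__name__] = "FAIL"
            traceback.print_exc()  # error handling: log the failure, move on
    return results

# Hypothetical example tests.
def test_addition():
    check_equals(2 + 2, 4, "addition")

def test_broken():
    check_equals(2 + 2, 5, "deliberately failing")

if __name__ == "__main__":
    for name, outcome in run_batch([test_addition, test_broken]).items():
        print(name, outcome)
```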

Effectively, our framework has provided us with an object-oriented way to work with tests. Since we are working with properties of objects, we can run very complex tests. The tests are also much more maintainable, which is especially important for Agile projects: when an object is renamed or a web page is refactored, the automated test only needs to be changed slightly to support the change in the code. (In QuickTest Pro, you would probably need to write a new test.) The other benefit is that we can write tests much more quickly using the framework. To sum it up, TestComplete (we used version 3 at the time) combined with the framework turned out to rival QuickTest Pro in power!
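The maintainability gain described above is essentially what the "page object" style delivers, and here is a minimal Python sketch under that assumption. The control names, the driver object, and its find_element, type_text, and click methods are hypothetical stand-ins for the framework's real object lookup.

```python
# Page-object sketch: each page's control names live in one class, so a
# renamed control or refactored page means editing one place, not every test.

class LoginPage:
    # If developers rename these controls, only these two lines change.
    USERNAME_FIELD = "txtUserName"  # hypothetical control name
    SUBMIT_BUTTON = "btnLogin"      # hypothetical control name

    def __init__(self, driver):
        self.driver = driver  # hypothetical UI-automation driver

    def log_in(self, username):
        self.driver.find_element(self.USERNAME_FIELD).type_text(username)
        self.driver.find_element(self.SUBMIT_BUTTON).click()

# Tests depend only on the page object's interface, not on control names:
def test_login(driver):
    LoginPage(driver).log_in("demo-user")
```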

Other Techniques
Checklists for developers

For projects with a lot of similar functionality present in different parts of the system (especially in web applications), testers produce a checklist of the most typical defects. This helps developers catch bugs early and saves time on testing later.

Pair testing

Yes, we actually do pair testing part of the time: both pair programming for automated testing and pair manual testing. It is a very helpful technique for troubleshooting and an invaluable coaching tool.

Conclusion

Skilled and experienced dedicated testers are an invaluable part of any good Agile team. By applying their expertise, communicating with developers and with the client, and using some of the approaches outlined in this article, they help ensure successful delivery and, ultimately, customer satisfaction. In addition, inspecting and adapting are key to agility. The best teams experiment with and tweak familiar tools and techniques to arrive at the optimal solution for the problem at hand.
