
Introduction

Any test automation program must be treated like the software engineering program that it is. It cannot be staffed with only the most junior employees who have little design and architecture experience. This document is a superset of good software engineering practices: all test automation code must follow basic coding and engineering standards, and test automation adds its own set of best practices that make for better tests. Test automation paradigms such as capture/playback are inherently just a series of commands in a script, even when augmented with some light logic and feedback, and thus are not considered in an infrastructure best practices document. This document assumes engineers are implementing a test automation framework, which requires design and implementation guidelines to make for better automation.

Tool Agnostic Practices


Many of the mature test automation tools come with their own set of processes designed to make them function in the specific manner the tool vendor chooses. Often those practices do not port to other tools and are designed to keep trained engineers locked into the vendor's tool. These practices are frequently tailored to produce quick results that help sell the tool, and the resulting scripts are generally capture/playback scripts requiring a great deal of individual script maintenance. These out-of-the-box, immediate-coverage tools and processes generally do not follow best practices. When an organization builds its process around a tool, it is limited to the capabilities of that tool and is bound to the tool vendor. Some tests may never be implemented because the tool is not capable of them, or because the vendor's practices do not lend themselves to those tests even when the tool is capable.

Instead, create test practices and automation goals that are tool agnostic: define capabilities and expectations first, then pick one or more tools that will enable those goals. Some of the tools with vendor-specific practices and processes can often be used in a generic manner, but an engineer trained in automation through only one tool will think only in terms of that tool. That same tool can still be adopted and then modified or extended to meet the agnostic automation needs.

Consider HP's QuickTest Pro (QTP), formerly part of the Mercury toolset, which is used for testing PC-native and web applications. This is a mature, industry-leading tool, so it is very easy to hire people well trained in it. Unfortunately, many of them only know the QTP way of automating. QTP can work in both a record-and-playback mode and an advanced mode where users write their own library of functions. The standard QTP training focuses on recording the user's actions, adding checkpoints to verify results, and using an object repository to store a data representation of objects on screen. Scripts written using these techniques become extremely fragile, as they fail when a single tag in the web page changes.

QTP also includes practices such as Descriptive Programming, which allows engineers to parameterize objects rather than rely on the hard-coded object repository. Unfortunately, most of the standard QTP training focuses on the repository rather than on building truly dynamic tests. A well-rounded automation engineer will choose the descriptive programming technique over the object repository technique because of its maintainability and portability benefits; someone who learns automation only through a tool will choose whichever technique the tool vendor recommends.

Top Down Design


Within software engineering, top-down design is the practice where the overall objective or goal of an algorithm or program is broken down and resolved into discrete components. This practice is well documented on the web, including this definition from Wikipedia (http://en.wikipedia.org/wiki/Topdown_and_bottom-up_design): "A top-down approach (also known as step-wise design) is essentially the breaking down of a system to gain insight into its compositional sub-systems. In a top-down approach an overview of the system is formulated, specifying but not detailing any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top-down model is often specified with the assistance of 'black boxes'; these make it easier to manipulate."

Figure 1: Top-Down Design Illustration

Keyword Libraries
A keyword library is a set of test verbs that perform the actions required by a test case. In Figure 1 above, these can be actions at Level 2, or they can be sub-actions at Levels 3 through 5. These test verbs abstract all of the processing and computer code away from the test script; the script writer bolts the keywords together to form a test script. Generally, Level 2 and Level 3 are methods intended for direct script use, while Levels 3 through 5 are methods a script can call but are intended to be helper functions. The library functions will either be self-contained or call other methods and libraries to perform their work. Not all tools support libraries and keywords; this best practice should be applied with every tool that does.
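As an illustration only, the sketch below shows what a thin keyword layer might look like in a Python framework driving Selenium WebDriver. The keyword names, page locators, and example URL are assumptions for this sketch, not part of any specific tool; the point is that the script composes verbs and never touches the driver directly.

    # Hypothetical keyword library: scripts call these verbs and never
    # touch the underlying tool (here, Selenium WebDriver) directly.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    _driver = None

    def OpenBrowser(url):
        """Level 2 keyword: start the browser and navigate to the application."""
        global _driver
        _driver = webdriver.Chrome()
        _driver.get(url)

    def LoginUser(name, password):
        """Level 2 keyword: log in; the field locators are assumed for the example."""
        _driver.find_element(By.NAME, "username").send_keys(name)
        _driver.find_element(By.NAME, "password").send_keys(password)
        _driver.find_element(By.NAME, "submit").click()

    def VerifyPageTitle(expected):
        """Level 3 keyword: compare the page title and return the result."""
        return _driver.title == expected

    # A test script is just keywords bolted together:
    # OpenBrowser("https://example.test")
    # LoginUser("tester", "secret")
    # assert VerifyPageTitle("Home")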

Shared Components
One trait of a well-architected software system is the use of shared components. If processing is repeated across multiple internal functions or test keywords, that processing should be abstracted into a shared component. If different tools can be extended in the same language or can import a shared DLL, a common set of functions or a single DLL should be used.

Function Naming
When creating an automation framework, the functions and methods become the commands callable by scripts and by higher-level layers of the automation stack, so their names need to be chosen wisely. The best practice is to make the function name both descriptive and action-based. Some commands are intended to be used directly in scripts; others are intended to be helper functions. Use the following template: <Action><Object><(Optional) Descriptor>. The optional descriptor can describe how the method does its work.

There are standard Action name conventions:

Get - Look up and return data or the state of the application under test
Set - Change a setting in a data store or a setting of the application under test
Load - Load data or information from an external data source
Log - Record some kind of information to a log
Check - Perform a check and return the result
Verify - Perform a check and invoke error handling based on the result

For example:

VoteLevel10() - a generic voting function
VoteLevel10byWebService() - specific to voting through the Web Service
VoteLevel10byBroadband() - forces the tool to open a browser and navigate to Level 10
WaitForTime() - script delay until a clock time
LoadGlobalData() - script command to load globals
VerifyNode()
VerifyNodeFromResponse()
GetGameContestant()
SetTestEnvironment()
LogScriptResult()

Common Scripting Language Across Tools


One aspect of any good automation framework is the concept of a toolkit: a series of tools that suit a variety of requirements. A good engineer then chooses the right tool for the right job. Assuming multiple tools share a multi-tiered framework that includes an abstracted scripting layer, use common test verbs and method names for the same functionality. For example, no single tool is optimized for both Internet Explorer and Firefox: QTP is best of breed for IE; Selenium is best of breed for Firefox. Both will need to run similar tests but will be implemented in different languages (QTP is VBScript based; Selenium supports other languages). For a common testing task like verifying an image is present, use a name like VerifyImageExists() in both tools' infrastructures. The benefit is that when engineers see these functions while using a new tool, they already know what the keyword does. It also enables tool-agnostic scripts, where a functional script can be written once and executed against multiple tools. If a tool has a native function that duplicates functionality of other tools, create a wrapper function around the native function to preserve the script-level commonality.
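A minimal sketch of that idea, assuming a Selenium-backed implementation in Python: the common verb VerifyImageExists() keeps the same name and signature in every toolkit, while its body wraps whatever native call the particular tool provides. The locator strategy shown here is an illustrative assumption.

    # Common verb shared across toolkits; only the body is tool specific.
    from selenium.webdriver.common.by import By
    from selenium.common.exceptions import NoSuchElementException

    def VerifyImageExists(driver, image_name):
        """Return True if an <img> whose src contains image_name is present.

        The same function name and signature would exist in the QTP (VBScript)
        library, wrapping that tool's native image checkpoint instead.
        """
        try:
            driver.find_element(
                By.XPATH, f"//img[contains(@src, '{image_name}')]")
            return True
        except NoSuchElementException:
            return False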

Worker and Wrapper Functions


Many groups of functions perform a similar task but differ slightly in how they operate. For example, a timing library might have methods to wait until a specific clock time, minutes past the hour, seconds past the minute, or hour of the day. Each of these methods has at its core a timer that compares the current time to the parameter time. A software engineering best practice is to cull the common code and aggregate it into one main worker function, then make the variants wrappers around that worker. Consider a WaitUntilTime() method that takes a clock time in HH:MM:SS as a parameter and pauses script execution until that time. This single function would handle all of the logic involved in clock-based time synchronization. Other functions could call it, keeping minimal logic within themselves: WaitUntilHour(int hour) would build the time parameter string and call WaitUntilTime(hour:00:00); WaitUntilMinute(int minute) would do the same with a parameter of 00:minute:00; and other functions could be created such as WaitUntilTomorrow(), which would call WaitUntilTime(00:00:00).
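A sketch of the worker-and-wrapper pattern under those assumptions, with names taken from the example above; the implementation details are illustrative.

    import time
    from datetime import datetime, timedelta

    def WaitUntilTime(clock_time):
        """Worker: pause execution until the next occurrence of HH:MM:SS."""
        target = datetime.strptime(clock_time, "%H:%M:%S").time()
        now = datetime.now()
        target_dt = datetime.combine(now.date(), target)
        if target_dt <= now:              # already passed today, wait for tomorrow
            target_dt += timedelta(days=1)
        time.sleep((target_dt - now).total_seconds())

    def WaitUntilHour(hour):
        """Wrapper: wait until the top of the given hour."""
        WaitUntilTime(f"{hour:02d}:00:00")

    def WaitUntilMinute(minute):
        """Wrapper: wait until the given minute, per the 00:minute:00 convention above."""
        WaitUntilTime(f"00:{minute:02d}:00")

    def WaitUntilTomorrow():
        """Wrapper: wait until midnight."""
        WaitUntilTime("00:00:00")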

The same applies to types of verification, such as verifying that a web object exists: use a common verification function instead of a tool's native verification technique. Consider a VerifyWebElement(tag, value) function that modularizes all web element verifications. Wrapper functions for particular tags could be written, such as VerifyWebElementbyName(name) or VerifyWebElementExists(true). If the object tagging paradigm changes, the individual verifications do not necessarily have to change as long as the value to check remains constant; only the VerifyWebElement() worker would be modified so that the value is looked up against the new tag type. Moving all of the core processing into a single worker function makes the automation library modular and more maintainable: if a change is required or a defect is found in the automation infrastructure, it only has to be fixed once.

Error Handling
Error handling with respect to an automation framework goes above and beyond basic software engineering error handling within a method to handle system faults; that should be part of any well-written piece of code, automation or not. Automation framework error handling deals with errors related to the test itself. When testing software applications that run in a variety of environments such as web pages, set-top boxes, cell phones, and web services, a number of faults can happen during the test. Some may be genuine faults in the software under test; some may be caused by the interface between the tool and the application; others may be faults in the automation software itself. A well-designed automation infrastructure must include error handling to catch routine faults and anomalies in the system. The automated test must be able to execute unattended and must report accurately on what failed when there is a failure.

Error Codes
Within a multi-tiered infrastructure, there will be low-level methods that interact with the Application Under Test (AUT); these can be control methods or feedback methods. Mid-level functions use these methods to perform a task that requires taking an action and then verifying the action succeeded. The methods need a standard way to report faults to each other, so a best practice is to create a standard list of error codes that all functions use. Action functions can return a simple integer. Get functions that are designed to return a value or string will have to return the error within the string using an appropriate template; this might mean a negative number, or a string starting with a specific character, indicates an error code. Again, an appropriate model must be designed that works within the limitations of any given tool. If the low-level method that gets the feedback cannot verify the action, it sends an appropriate error code to the calling method. Some tools might allow more descriptive error messages, while others might only allow an integer or simple string; the appropriate paradigm must be chosen for the tool.
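A minimal sketch of a shared error-code convention in a Python framework. The specific codes, the leading-'#' string convention, and the stb.read_osd_channel() feedback call are assumptions for illustration, not prescribed by the text.

    # Shared error codes used by every layer of the hypothetical framework.
    SUCCESS        = 0
    ERR_NOT_FOUND  = -1   # object or element could not be located
    ERR_TIMEOUT    = -2   # feedback did not arrive in time
    ERR_TOOL_FAULT = -3   # failure inside the automation tooling itself

    def GetChannelNumber(stb):
        """Get function: returns the channel as a string, or an error marker.

        Because this function must return a string, errors are encoded in the
        string itself (here, a leading '#' followed by the error code).
        """
        channel = stb.read_osd_channel()   # assumed low-level feedback call
        if channel is None:
            return "#%d" % ERR_TIMEOUT
        return channel

    def is_error(value):
        """True if a returned value carries an error code rather than data."""
        return (isinstance(value, int) and value < 0) or (
            isinstance(value, str) and value.startswith("#"))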

The calling method then takes appropriate action based on that error. It might employ retry logic as described below, it might simply pass the error back up, or it might cease execution of the script. Functions and methods will need to be designed with appropriate handling logic.

Retry Logic
Retry logic is concerned with how a script method handles an error in the feedback and how it can retry actions without failing the step or the method. For example, when attempting to tune a set-top box to channel 123 using a TuneSTB() function, the STB might drop the 2 key and tune to channel 13; this is a frequent occurrence in many older set-top models. To verify the tune succeeded, the TuneSTB() function might call the CheckSTBChannel() method, which would then come back with an error. A best practice is to give the tuning function retry logic so that it attempts the tune again before reporting a tuning failure. A better practice is to make the number of retry attempts configurable from the test tool. If the function cannot complete the action within the allowed number of retries, it reports the failure to the calling method or script. There will be script steps and infrastructure methods that cannot include retry logic because the test is whether the action can succeed the first time; in that case, create two versions of the function, possibly with one acting as a wrapper around the other.
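A sketch of configurable retry logic using the TuneSTB() and CheckSTBChannel() names from the example above; the stb object and its send_channel()/current_channel() calls are assumptions for illustration.

    def CheckSTBChannel(stb, channel):
        """Check: return True if the STB reports the expected channel."""
        return stb.current_channel() == channel      # assumed feedback call

    def TuneSTB(stb, channel, max_retries=3):
        """Tune to a channel, retrying on failed feedback before reporting failure.

        max_retries would normally come from a tool-level configuration file
        rather than a hard-coded default argument.
        """
        for attempt in range(1, max_retries + 1):
            stb.send_channel(channel)                # assumed control call
            if CheckSTBChannel(stb, channel):
                return True
        return False                                 # caller decides how to fail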

Verify vs. Check Methods


One best practice in creating an automation framework is the pairing of a Verify function with a Check function; this is part of the error handling design. Consider the VerifyWebElement() function described above. The Check function performs the actual analysis and comparison and returns a result without formally logging a Pass or Fail; functions that employ retry logic call the Check function for feedback and take appropriate action. The CheckWebElement() function would do the actual comparison and return a pass/fail status. The Verify function calls the Check function to get the required result and, depending on that result, engages the error handling processing, such as returning the appropriate code to the calling function and/or logging the pass or failure to the appropriate logs. The VerifyWebElement() function would call CheckWebElement() and take appropriate actions based on the return code. The benefit of this approach is that the actual comparison is performed in only one method; the Check function can be reused by many other keywords and helper functions without failing a script just because a check failed.
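A sketch of the Check/Verify split under the same Selenium-style assumptions as the earlier examples; the logging calls, return codes, and locator strategy are illustrative.

    import logging
    from selenium.webdriver.common.by import By

    log = logging.getLogger("framework")

    def CheckWebElement(driver, tag, value):
        """Check: do the comparison only; no logging, no script failure."""
        elements = driver.find_elements(By.TAG_NAME, tag)
        return any(value in (el.get_attribute("outerHTML") or "") for el in elements)

    def VerifyWebElement(driver, tag, value):
        """Verify: call the Check, then log pass/fail and return an error code."""
        if CheckWebElement(driver, tag, value):
            log.info("PASS: <%s> containing %r found", tag, value)
            return 0          # SUCCESS, per the shared error codes
        log.error("FAIL: <%s> containing %r not found", tag, value)
        return -1             # ERR_NOT_FOUND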

Script Level Errors


Ultimately, some failures and errors may not be recoverable and will be reported back to the script level. As documented in the Script Writing Best Practices, a well-designed automation framework takes different action based on the section of the script that called the failing function. In all sections the failure should be noted in the logs, but the actions vary. In the Setup / Base State section, any script failure should cease script execution and result in a Can Not Test (CNT) result. In the Action section, any script failure should cease script execution and result in a Failure result. In the Verification section, any script failure should continue execution to the next step and result in a Failure result. In the Cleanup section, any script failure should continue execution and not affect the script result. The automation framework will need to be implemented so the tool can take these steps automatically, without the need for logic in the script itself.
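A sketch of how a framework might encode those rules so individual scripts carry no failure-handling logic. The section names and result values follow the text; the dispatch mechanism and the script_result object are assumptions.

    # Per-section failure policy: (stop_script, result_to_record)
    FAILURE_POLICY = {
        "setup":        (True,  "CNT"),    # Can Not Test
        "action":       (True,  "FAIL"),
        "verification": (False, "FAIL"),   # keep going to the next step
        "cleanup":      (False, None),     # does not affect the script result
    }

    class ScriptAbort(Exception):
        """Raised by the framework to stop the current script."""

    def handle_step_failure(section, script_result):
        """Apply the policy for the section in which the failing keyword ran."""
        stop, result = FAILURE_POLICY[section]
        if result is not None:
            script_result.record(result)   # assumed result-log object
        if stop:
            raise ScriptAbort(section)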

Results Reporting
Any automated test would be meaningless without appropriate output that engineers can review to determine whether the test suite passed or failed. The logs need to be designed so the engineer can make this decision quickly and spend a minimal amount of time sifting through data. The standard best practice is to use different levels of logging.

Result Log
The result log is a very high-level summary of each script within a test suite. It includes the overall script's status as well as the individual script step results; it does not include every process and action taken under the hood. Result logs are often rendered in friendlier formats such as XML and HTML to make it easier for engineers and management to understand the overall results.

Script / System Log


The script / system log is a mid-level log. It contains script and step results but also includes the functions and methods called and their status. Each entry in the log is time-stamped to allow the engineer to quickly find the state of the AUT at the time of the failure(s). The detail in this log can vary depending on whether a separate detailed log exists.

Detailed / Run Log


The detailed / run log is an extremely detailed, low-level log that records every action and intermediate result of the test. This log is usually only used to diagnose a failure of the script or to debug defects in the automation tool itself. Each line is time-stamped to correlate the actions of the testing tool modules with the state of the AUT.
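One way to realize these three log levels in a Python framework is with the standard logging module, routing the same events to different files at different thresholds. The file names and the level-to-log mapping below are assumptions for illustration; a real framework might define custom levels for step and script results.

    import logging

    def build_loggers():
        """Route framework events to result, script/system, and detailed logs."""
        fmt = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
        root = logging.getLogger("framework")
        root.setLevel(logging.DEBUG)

        for filename, level in (
            ("result.log",   logging.WARNING),  # high level: step/script results only
            ("script.log",   logging.INFO),     # mid level: keywords called and status
            ("detailed.log", logging.DEBUG),    # low level: every action, for debugging
        ):
            handler = logging.FileHandler(filename)
            handler.setLevel(level)
            handler.setFormatter(fmt)
            root.addHandler(handler)
        return root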

Abstracting Data
To the greatest extent possible, data used by the test should be abstracted out of the test scripts. This includes a number of categories of data such as environment data, application string tables, timeouts, etc. Scripts should contain a minimal number of hard-coded numbers and strings; these should be abstracted into data and configuration files. If the automated test is a data-driven test, the strings can be part of each case's inputs and outputs. For each type of data, keywords will need to be created, such as GetEnvironmentURL() or LoadGlobalData(). These can be loaded into global variables, or the script can redirect them to script variables. By abstracting as much data and information as possible, scripts become easier to maintain and update.

Environment Switching
Most organizations use several environments throughout the SDLC of their software: Dev, QA, Staging, Production, etc. They want to use the same test script regardless of the environment in use. A best practice in a test automation framework is to make the environment switchable, abstracted into a separate data file; the script should not have the environment URL hard coded into it. If properties change based on the environment, such as a timeout or the database pointed to, that information should also be abstracted out and read in during initialization. For web-based applications this is the core URL; for web service based applications, this is the endpoint.
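A sketch of environment switching driven by a configuration file. The file format, keys, and the GetEnvironmentURL() keyword are assumptions consistent with the keywords named above.

    import json

    _config = {}

    def SetTestEnvironment(name, config_path="environments.json"):
        """Load the named environment block (Dev, QA, Staging, Prod) into globals."""
        global _config
        with open(config_path) as fh:
            _config = json.load(fh)[name]

    def GetEnvironmentURL():
        """Return the core URL (or web-service endpoint) for the active environment."""
        return _config["base_url"]

    def GetEnvironmentTimeout():
        """Return the environment-specific timeout, defaulting if not configured."""
        return _config.get("timeout_seconds", 30)

    # environments.json (illustrative):
    # {
    #   "QA":   {"base_url": "https://qa.example.test",  "timeout_seconds": 60},
    #   "Prod": {"base_url": "https://www.example.test", "timeout_seconds": 30}
    # }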

Assume Failure First


As a philosophical paradigm, automation frameworks should be written to err on the side of caution. Any verification or checking of functionality should assume the test has failed until a positive match is made. This minimizes the occurrence of False Positives, where the tool reports a success when there was a failure; that event can never be allowed to happen, as it undermines confidence in the tool and the automation process. The approach might instead lead to False Failures, where the tool reports an issue when the AUT performed normally; this is not good, but it is tolerable. For example, a Check function should set its result to FAIL at initialization and only set the result to PASS once it finds all of the expected data points. This is an affirmative verification that the test was successful, and the design ensures that any positive result came from the tool actually finding the expected data.

There may be cases where an affirmative verification cannot be made. For example, when testing that an STB has gone into Standby mode (soft power off), the screen should be black, no audio should be present, and other keys should not cause a change. But the screen can be black in a number of other conditions, not just Standby; using the black screen for this specific test assumes that the STB is not in any of the other conditions that lead to a black screen. This type of logic is verification by assumption. It is allowable but highly dangerous and should be used only when no other option is available; for example, is the code instrumented to output the STB state through a data port that could be used instead of the black screen?
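A brief sketch of the assume-failure-first pattern; the data-point comparison is a placeholder for whatever feedback the tool provides.

    def CheckAllDataPoints(expected_points, actual_points):
        """Assume failure; only flip to PASS after every expected point is matched."""
        result = "FAIL"                  # pessimistic initialization
        for point in expected_points:
            if point not in actual_points:
                return result            # any miss leaves the result at FAIL
        result = "PASS"                  # affirmative match of every data point
        return result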

Generic Coding Standards


Any good software infrastructure must have a set of coding standards, and automation must have an agreed set as well. This includes both the infrastructure and the scripts. If there are standards set for software development in the Engineering department, automation should follow them to the greatest extent possible. Individual tools and applications might require additional standards because of the limitations of the tool.
