Any Test Automation program must be treated like the Software Engineering program that it is. It cannot be staffed with the most junior employees who have little design and architecture experience. This document is a superset of good Software Engineering practices: any test automation code must follow basic coding and engineering standards, and test automation adds a further set of best practices that make for better tests. Test automation paradigms such as Capture/Playback are inherently just a series of commands in a script, even when augmented with some light logic and feedback, and thus are not considered in an infrastructure best-practices document. This document assumes Engineers are implementing a test automation framework, which requires design and implementation guidelines to make for better automation.
QTP also includes practices such as Descriptive Programming, which allows engineers to parameterize the objects rather than use the hard-coded object repository. Unfortunately, most standard QTP training focuses on the repository rather than on making truly dynamic tests. A well-rounded Automation Engineer will choose the descriptive programming technique over the object repository technique because of its maintainability and portability implications; someone who learns automation only through a tool will choose whatever technique the tool vendor recommends.
Keyword Libraries
A keyword library is a set of test verbs that performs the actions required for the test case. In Figure 1 above, these can be actions in Level 2, or they can be sub-actions in Levels 3 through 5. These test verbs abstract all of the processing and computer code away from the test script. The script writer bolts these keywords together to form a test script. Generally, Level 2 and Level 3 methods are intended for script use; Level 3 through Level 5 methods can be called by a script but are intended to be helper functions. The library functions will either be self-contained or they may call other methods and libraries to perform their work. Not all tools support libraries and keywords; this best practice should be used with every tool that does.
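The layering above can be sketched as a small keyword library. This is a minimal illustration, not any particular tool's API: the driver object, the keyword names, and the login flow are all made up for the example (plain Python is used since the document is tool-agnostic).

```python
# Minimal sketch of a keyword library: test verbs that hide tool-level
# processing from the script writer. All names here are illustrative.

class KeywordLibrary:
    """Level 2/3 test verbs that a script writer bolts together."""

    def __init__(self, driver):
        self.driver = driver  # the underlying tool/driver object

    # Level 2 verb intended for direct script use
    def login(self, user, password):
        self._enter_text("username", user)    # calls a lower-level helper
        self._enter_text("password", password)
        self._click("submit")

    # Level 4/5 helpers: a script *can* call these, but normally does not
    def _enter_text(self, field, value):
        self.driver.type(field, value)

    def _click(self, control):
        self.driver.click(control)


# A test script is then just a sequence of keywords:
#   lib = KeywordLibrary(driver)
#   lib.login("tester", "secret")
```

The script writer sees only `login()`; every tool-specific detail stays inside the library, so a change in the tool interface touches the helpers, not the scripts.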
Shared Components
One trait of a well-architected software system is the use of shared components. If processing is repeated across multiple internal functions or test keywords, that processing should be abstracted into a shared component. If different tools can be extended in the same language or can import a shared DLL, a common set of functions or a single DLL should be used.
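As a small illustration of the shared-component idea, assume (hypothetically) that two different keywords both need to compare channel numbers that may arrive with padding or whitespace. The normalization logic lives in one shared function rather than being duplicated in each keyword:

```python
# Sketch: processing repeated across keywords is pulled into one shared
# helper instead of being duplicated. Names and the channel format are
# illustrative assumptions.

def normalize_channel(raw):
    """Shared component: strip whitespace and leading zeros from a channel."""
    return str(raw).strip().lstrip("0") or "0"

# Two different keywords reuse the same shared component:
def check_channel(expected, actual):
    return normalize_channel(expected) == normalize_channel(actual)

def log_channel(channel, log):
    log.append("tuned to " + normalize_channel(channel))
```

If the channel format ever changes, only `normalize_channel` changes; every keyword that uses it picks up the fix for free.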
Function Naming
When creating an automation framework, the functions and methods are callable by scripts and higher-level layers of the automation stack, so their names need to be chosen wisely. The best practice is to make the function name both descriptive and action-based. Some commands are intended to be used in scripts; others are intended to be helper functions. Use the following template:

<Action><Object><(Optional) Descriptor>

The optional descriptor can describe how the method does its work. There are some Action name conventions:

Get - look up and return data or the state of the application under test
Set - change a setting in a data store or change a setting of the application under test
Load - load data or information from an external data source
Log - record some kind of information to a log
Check - perform a check and return the result
Verify - perform a check, and invoke error handling based on the result

For example:

VoteLevel10() - a generic voting function
VoteLevel10byWebService() - specific to voting through the Web Service
VoteLevel10byBroadband() - forces the tool to open a browser and navigate to Level 10
For repeated types of verification, such as verifying that a web object exists, use a common verification function instead of a tool's native verification technique. Consider a VerifyWebElement(tag, value) function. This modularizes all web element verifications. Wrapper functions for specific tags could be written, such as VerifyWebElementbyName(name) or VerifyWebElementExists(true). If the object tagging paradigm changes, each individual verification does not necessarily have to be changed as long as the value to check is constant; the VerifyWebElement function would be modified so the lookup uses the new tag type. Moving all of the core processing to a single worker function makes the automation library modular and more maintainable: if a change is required or a defect is found in the automation infrastructure, the fix only has to be made once.
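The single-worker pattern can be sketched as follows. The driver and its `find(tag, value)` lookup are hypothetical stand-ins for whatever the tool actually provides; the point is that the wrappers contain no lookup logic of their own:

```python
# Sketch of one worker verification function with thin wrappers, assuming
# a hypothetical driver.find(tag, value) lookup on the tool's driver.

class WebVerifier:
    def __init__(self, driver):
        self.driver = driver

    def verify_web_element(self, tag, value):
        """Single worker: all web element verifications funnel through here.
        If the tagging paradigm changes, only this method changes."""
        if not self.driver.find(tag, value):
            raise AssertionError(f"element not found: {tag}={value}")
        return True

    # Thin wrappers so scripts read cleanly; they hold no lookup logic:
    def verify_web_element_by_name(self, name):
        return self.verify_web_element("name", name)

    def verify_web_element_by_id(self, element_id):
        return self.verify_web_element("id", element_id)
```

A script calls the wrapper that matches its intent; a tagging change is absorbed entirely inside `verify_web_element`.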
Error Handling
Error handling with respect to an automation framework goes above and beyond basic software engineering error handling within a method to handle system faults; that should be part of any well-written piece of code, automation or not. Automation framework error handling deals with errors related to the test itself. When testing software applications that run in a variety of environments, such as web pages, set top boxes, cell phones, and web services, a number of faults can happen during the test. Some may be genuine faults in the software under test. Some may be caused by the interface between the tool and the application. Others may be faults in the automation software itself. A well-designed automation infrastructure must include error handling to catch routine faults and anomalies in the system. The automated test must be able to execute unattended and must report accurately on what failed when there is a failure.
Error Codes
Within a multi-tiered infrastructure, there will be low-level methods that interact with the Application Under Test (AUT). These can be control methods or feedback methods. Mid-level functions will use these methods to perform a task that needs to take an action and then verify the action succeeded. The methods need a standard way to report faults to each other. A best practice for automation is to create a standard list of error codes that all functions use. Action functions can return a simple integer. Get functions that are designed to return a value or string will have to return the error within the string using an appropriate template; this might mean that a negative number, or a string starting with a specific character, is an error code. Again, an appropriate model must be designed that fits the limitations of any given tool. If the low-level method that gets the feedback cannot verify the action, it will send an appropriate error code to the calling method. Some tools might allow more descriptive error messages, and some might only allow an integer or simple string; the appropriate paradigm must be chosen for the tool.
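A minimal sketch of this model, under the assumptions the text describes: action functions return an integer from a shared code list, and Get functions that must return a string embed the code behind an "ERR:" prefix. The device object, its methods, and the specific codes are all illustrative:

```python
# Shared error-code list used by every function in the framework.
PASS = 0
ERR_TIMEOUT = -1
ERR_NOT_FOUND = -2
ERR_TOOL_FAULT = -3

def set_volume(device, level):
    """Action function: returns a standard integer code."""
    if not device.send("volume", level):
        return ERR_TOOL_FAULT
    return PASS

def get_channel(device):
    """Get function: a normal result is the channel string; an error
    comes back inside the string using the ERR: template."""
    raw = device.read("channel")
    if raw is None:
        return f"ERR:{ERR_NOT_FOUND}"
    return raw

def is_error(result):
    """Callers use one shared helper to test either return style."""
    if isinstance(result, int):
        return result != PASS
    return result.startswith("ERR:")
```

Because every function draws from the same list, a mid-level method can interpret a failure from any lower-level call the same way, regardless of which method produced it.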
The calling method will then take appropriate action based on that error. It might employ retry logic as described below. It might simply pass the error back up. Or it might cease execution of the script. Functions and methods will need to be designed with appropriate handling logic.
Retry Logic
Retry logic is concerned with how a script method handles an error in the feedback and how it can retry actions without failing the step or the method. For example, when attempting to tune a set top box to channel 123 using a TuneSTB() function, the STB might have dropped the 2 key and tuned to channel 13 instead. This is a frequent occurrence in many older set top models. To verify the tune succeeded, the TuneSTB() function might call the CheckSTBChannel() method, which would then come back with an error. A best practice is to give that tuning function retry logic so that it attempts the tune again before reporting a tuning failure. A better practice is to make the number of retry attempts configurable from the test tool. If the function still fails after the allowed number of retries, it then reports the failure to the calling method or script. There will be script steps and infrastructure methods that cannot include retry logic, because the test is whether the action can succeed the first time. In that case, create two versions of the function, possibly with one acting as a wrapper around the other.
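The TuneSTB pattern above can be sketched as follows. The stb object and its methods are assumptions for illustration; the retry count is the configurable value the text recommends, and the "first try only" variant is the wrapper approach:

```python
# Sketch of the TuneSTB retry pattern. The stb object, its send_digits /
# current_channel methods, and the default retry count are illustrative.

def tune_stb(stb, channel, max_retries=3):
    """Attempt a tune; verify via feedback; retry before failing."""
    for attempt in range(max_retries):
        stb.send_digits(channel)
        if check_stb_channel(stb, channel):  # feedback verification
            return True
    return False  # report the failure to the calling method or script

def check_stb_channel(stb, expected):
    """Feedback method: did the STB land on the expected channel?"""
    return stb.current_channel() == expected

# A "must succeed on the first try" test wraps the same worker:
def tune_stb_first_try(stb, channel):
    return tune_stb(stb, channel, max_retries=1)
```

Because the wrapper reuses the worker with a retry count of one, both behaviors share a single implementation and a single place to fix defects.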
The framework should take a different action based on the section of the script that called the failing function. In all sections, the failure should be noted in the logs, but the actions vary. In the Setup / Base State section, any script failure should cease script execution and result in a Can Not Test (CNT) result. In the Action section, any script failure should cease script execution and result in a Failure result. In the Verification section, any script failure should continue execution to the next step and result in a Failure result. In the Cleanup section, any script failure should continue execution and not affect the script result. The automation framework will need to be implemented so the tool can take the appropriate steps automatically, without the need for logic in the script itself.
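The per-section policy can be expressed as one small table inside the framework, so scripts never carry this logic themselves. The section names and result labels follow the text; the table shape is an illustrative assumption:

```python
# Sketch of the per-section error policy, kept in the framework so the
# tool decides what a failure means without logic in the script.

SECTION_POLICY = {
    # section:      (continue execution?, result on failure)
    "setup":        (False, "CNT"),      # Can Not Test
    "action":       (False, "Failure"),
    "verification": (True,  "Failure"),
    "cleanup":      (True,  None),       # does not affect the script result
}

def handle_step_failure(section, result_log):
    """Log the failure, then apply the policy for the calling section."""
    keep_going, result = SECTION_POLICY[section]
    result_log.append(f"step failed in {section} section")  # always logged
    return keep_going, result
```

The script runner consults `handle_step_failure` after every failed step; adding or changing a policy is a one-line edit to the table.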
Results Reporting
Any automated test would be meaningless without appropriate output that Engineers can review to determine whether the test suite passed or failed. The logs need to be designed so the Engineer can make this decision quickly and spend a minimal amount of time sifting through data. The standard best practice is to use different levels of logging.
Result Log
The result log is a very high-level summary of each script within a test suite. It includes the overall script's status as well as individual script step results. It does not include every process and action taken "under the hood." Result logs are often rendered in elegant output formats such as XML and HTML to make it easier for Engineers and Management to understand the overall results.
Abstracting Data
To the greatest extent possible, data used by the test should be abstracted from the test scripts. This includes a number of categories of data such as environment data, application string tables, timeouts, etc. Scripts should contain a minimal number of hard-coded numbers and strings; these should be abstracted into data and configuration files. If the automated test is a data-driven test, the strings can be part of each case's inputs and outputs. For each type of data, keywords will need to be created, such as GetEnvironmentURL or LoadGlobalData. These can load values into global variables, or the script can redirect them to script variables. By abstracting as much data and information as possible, scripts become easier to maintain and update.
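A minimal sketch of such data-abstraction keywords follows. The file layout, the JSON format, and the lookup keyword names are assumptions; only the LoadGlobalData idea comes from the text:

```python
# Sketch of data-abstraction keywords: string tables and timeouts live
# in a data file, not in the scripts. Layout and names are illustrative.

import json

GLOBALS = {}  # data loaded here instead of being hard-coded in scripts

def load_global_data(path):
    """LoadGlobalData keyword: read string tables, timeouts, etc."""
    with open(path) as f:
        GLOBALS.update(json.load(f))

def get_string(key):
    """Look up an application string by key (hypothetical keyword)."""
    return GLOBALS["strings"][key]

def get_timeout(name):
    """Look up a configured timeout by name (hypothetical keyword)."""
    return GLOBALS["timeouts"][name]
```

A script calls `load_global_data()` once during initialization and then refers to every string and timeout by name, so updating a string table never touches a script.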
Environment Switching
Most organizations use many environments throughout the SDLC of their software: Dev, QA, Staging, Production, etc. They want to use the same test script regardless of the environment being used. A best practice in a test automation framework is to make the environment switchable, abstracted into a separate data file. The script should not have the environment URL hard-coded into it. If properties change based on the environment, such as a timeout or the database pointed to, that information should also be abstracted out and read in during initialization. For web-based applications, this is the core URL; for web service based applications, this is the endpoint.
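Environment switching can be sketched as a per-environment property table selected by name at initialization. In practice the table would live in the separate data file the text describes; the environment names, URLs, and properties here are invented for the example:

```python
# Sketch of environment switching: one table holds every environment's
# core URL (or endpoint) and per-environment properties. In practice this
# table would be read from a separate data file; values are illustrative.

ENVIRONMENTS = {
    "dev":     {"url": "https://dev.example.test", "db": "dev_db", "timeout": 60},
    "qa":      {"url": "https://qa.example.test",  "db": "qa_db",  "timeout": 30},
    "staging": {"url": "https://stg.example.test", "db": "stg_db", "timeout": 30},
}

class TestContext:
    """Initialized once per run; scripts read properties from here and
    never hard-code an environment URL themselves."""

    def __init__(self, environment):
        props = ENVIRONMENTS[environment]
        self.base_url = props["url"]   # core URL, or web-service endpoint
        self.database = props["db"]
        self.timeout = props["timeout"]
```

Switching the whole suite from QA to Staging then means changing one environment name at initialization, with no script edits.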
but highly dangerous and should be used only when no other option is available. For example, is the code instrumented to output the STB state through a data port that could be used instead of the black screen?