Revision History

Issue Date   Rev   Description of Change   Initiated By
26-May-09    1.0   Initial version         Pragya Dwivedi
Table of Contents

1   Overview
2   Semi-Automation Flow Diagram
2.1 Semi-Automation for Template-Based Tests
2.2 Semi-Automation Flow for Non-Template-Based Tests
3   Semi-Automation Coding Guidelines
4   Semi-Automation Code Review Guidelines
5   Test Procedure Guidelines
1 Overview
This document describes the semi-automation flow the QA group (Pune) should follow. It also lists the Semi-Automation Coding Guidelines and the Test Procedure Guidelines.
2 Semi-Automation Flow Diagram

2.1 Semi-Automation for Template-Based Tests

[Flow diagram. Labels recoverable from the extraction: "Automated templates need change?" (Test Owner); if yes, the Test Owner modifies the test and the Auto Repository is updated; "Generate QPRs for new support and add support" (Auto Owner); "Gnats updated"; if no modification is needed, "Semi-automate a few more tests in the module" (Test Owner).]
2.2 Semi-Automation Flow for Non-Template-Based Tests

[Flow diagram. Only the decision labels survived extraction; the flow includes "Gnats updated" steps following the yes/no decision points.]
3 Semi-Automation Coding Guidelines

1. A semi-automated test should not have any syntax errors. Syntax errors result from incorrect usage of APIs or TCL commands and can be avoided by understanding the API usage.
2. Tests should not have any hardcoded IPs or names of test devices, interfaces, etc. These should be fetched from the rack or configuration file. Most of the IPs and device names are available as public variables in the base class.
3. Direct use of runCmds, such as callgen commands and boxer show commands, should be avoided.
4. Tests should not reference private data (data in personal clones/homes/local disks). Data such as tft/configuration files should be part of BK.
5. Huge configuration in tests should be avoided; it can be moved into a configuration file.
6. Unwanted configuration should not be present in tests.
7. Configuration and rack files used for a test should be shared with automation.
8. Log files generated by executing semi-automated tests should be shared with automation.
9. Unwanted delays in tests should be avoided. If hard delays are needed, they should be well documented.
10. Dangerous/prohibited commands, such as the deleteContext APIs, should not be used. Using this API, and commands like reload, is prohibited because they re-configure the system afresh, so automation would lose critical bugs that could otherwise be caught.
11. Tests should not have unwanted callgen arguments.
12. Wherever possible, automation APIs should be used. The API document can be consulted for correct usage. Automation Engineers should help in locating the proper APIs if they are not easily visible.
13. The naming convention (same as the automation coding guideline) should be followed.
14. Tests should not have the monitor started for unnecessary protocols.
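As a minimal illustration of guideline 2 (no hardcoded IPs or device names), a test can read device details from a configuration file instead of embedding them. This is a hedged sketch only: the file name `rack.cfg`, the section/key names, and the `load_device` helper are hypothetical, since the real framework exposes these values as public variables in the base class and rack files.

```python
# Sketch: fetch device details from a configuration file instead of
# hardcoding them in the test body. The config format and key names here
# are hypothetical stand-ins for the framework's rack/configuration file.
import configparser

def load_device(config_path, device_name):
    """Return (ip, interface) for a named test device from the config file."""
    cfg = configparser.ConfigParser()
    cfg.read(config_path)
    section = cfg[device_name]
    return section["ip"], section["interface"]

# Write a small sample config, then read it back the way a test would.
with open("rack.cfg", "w") as f:
    f.write("[dut1]\nip = 10.0.0.1\ninterface = eth0\n")

ip, iface = load_device("rack.cfg", "dut1")
print(ip, iface)  # prints: 10.0.0.1 eth0
```

If the device later moves to a new address, only the configuration file changes; the test itself stays untouched, which is the point of the guideline.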
5 Test Procedure Guidelines

3. The procedure should clearly indicate the verification points, such as the stats to be checked and the attributes in the monitor dump. CLI/Monitor output recorded during test execution should be included in the test procedure for reference.
4. Mentions the radiuslite/callgen versions to be used. By default all tests should use the latest matching versions, but sometimes a test uses a particular callgen/radiuslite version because needed support is unavailable or because of a PR against other versions. The manual owner mostly has this information; if it is reflected in the test procedure, engineers won't have to dig much when they face issues simulating the test. Specific versions should be kept in a common location.
5. Mentions workarounds for any tool-related issues.
6. Often one test combines multiple scenarios. Such tests should be split into multiple tests, one per scenario. This makes it easier to debug a test failure and to report pass/fail precisely; otherwise one of the 10 scenarios in a test would fail and the whole test would be marked as failed.
7. Tests should explicitly mention the applicable version and the customers for the test. The applicable version for a test, as mentioned in QADB, is the version for which the test scenario is valid.
8. Test-specific configuration should be mentioned explicitly.
9. The pass/fail criterion should match the test purpose and procedure.
10. Procedures should be in line with the Requirement field (e.g., PRs).
11. Tests should mention callgen arguments clearly.
12. Tests should be properly formatted. All test steps should have step numbers indicated.
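Guideline 6 above can be sketched as follows. Instead of one test exercising several scenarios with a single combined verdict, each scenario becomes its own test so pass/fail is reported per scenario. The `simulate_call` helper and the codec names are invented for illustration; a real test would drive callgen/radiuslite against the device under test.

```python
import unittest

# Hypothetical stand-in for the real call-simulation step; an actual test
# would drive callgen/radiuslite against the device under test.
def simulate_call(codec):
    return codec in ("g711", "g729")

class CallSetupTests(unittest.TestCase):
    # One scenario per test method, so a single failure is reported
    # precisely instead of failing one combined multi-scenario test.
    def test_basic_call_setup(self):
        self.assertTrue(simulate_call(codec="g711"))

    def test_call_setup_with_compression(self):
        self.assertTrue(simulate_call(codec="g729"))

# Run the suite programmatically; each scenario gets its own verdict.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CallSetupTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

With this layout, a failure in the compression scenario leaves the basic scenario marked as passed, which is exactly the precise per-scenario reporting the guideline asks for.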