Copyright Notice

This document may be copied in its entirety, or extracts made, if the source is acknowledged.
Certified Tester
Advanced Level Syllabus - Test Analyst
Copyright © International Software Testing Qualifications Board (hereinafter called ISTQB). Advanced Level Test Analyst Working Group: Judy McKay (Chair), Mike Smith, Erik van Veenendaal; 2010-2012.
Revision History
Version       Date      Remarks
ISEB v1.1     04SEP01   ISEB Practitioner Syllabus
ISTQB 1.2E    SEP03     ISTQB Advanced Level Syllabus from EOQ-SG
V2007         12OCT07   Certified Tester Advanced Level syllabus version 2007
D100626       26JUN10   Incorporation of changes as accepted in 2009, separation of chapters for the separate modules
D101227       27DEC10   Acceptance of changes to format and corrections that have no impact on the meaning of the sentences
D2011         23OCT11   Change to split syllabus, re-worked LOs and text changes to match LOs. Addition of BOs.
Alpha 2012    09MAR12   Incorporation of all comments from NBs received from October release
Beta 2012     07APR12   Incorporation of all comments from NBs received from the Alpha release
Beta 2012     07APR12   Beta Version submitted to GA
Beta 2012     08JUN12   Copy edited version released to NBs
Beta 2012     27JUN12   EWG and Glossary comments incorporated
Table of Contents
Revision History
Table of Contents
Acknowledgements
0. Introduction to this Syllabus
  0.1 Purpose of this Document
  0.2 Overview
  0.3 Examinable Learning Objectives
1. Testing Process - 300 mins.
  1.1 Introduction
  1.2 Testing in the Software Development Lifecycle
  1.3 Test Planning, Monitoring and Control
    1.3.1 Test Planning
    1.3.2 Test Monitoring and Control
  1.4 Test Analysis
  1.5 Test Design
    1.5.1 Concrete and Logical Test Cases
    1.5.2 Creation of Test Cases
  1.6 Test Implementation
  1.7 Test Execution
  1.8 Evaluating Exit Criteria and Reporting
  1.9 Test Closure Activities
2. Test Management: Responsibilities for the Test Analyst - 90 mins.
  2.1 Introduction
  2.2 Test Progress Monitoring and Control
  2.3 Distributed, Outsourced and Insourced Testing
  2.4 The Test Analyst's Tasks in Risk-Based Testing
    2.4.1 Overview
    2.4.2 Risk Identification
    2.4.3 Risk Assessment
    2.4.4 Risk Mitigation
3. Test Techniques - 825 mins.
  3.1 Introduction
  3.2 Specification-Based Techniques
    3.2.1 Equivalence Partitioning
    3.2.2 Boundary Value Analysis
    3.2.3 Decision Tables
    3.2.4 Cause-Effect Graphing
    3.2.5 State Transition Testing
    3.2.6 Combinatorial Testing Techniques
    3.2.7 Use Case Testing
    3.2.8 User Story Testing
    3.2.9 Domain Analysis
    3.2.10 Combining Techniques
  3.3 Defect-Based Techniques
    3.3.1 Using Defect-Based Techniques
    3.3.2 Defect Taxonomies
  3.4 Experience-Based Techniques
    3.4.1 Error Guessing
    3.4.2 Checklist-Based Testing
    3.4.3 Exploratory Testing
    3.4.4 Applying the Best Technique
4. Testing Software Quality Characteristics - 120 mins.
  4.1 Introduction
  4.2 Quality Characteristics for Business Domain Testing
    4.2.1 Accuracy Testing
    4.2.2 Suitability Testing
    4.2.3 Interoperability Testing
    4.2.4 Usability Testing
    4.2.5 Accessibility Testing
5. Reviews - 165 mins.
  5.1 Introduction
  5.2 Using Checklists in Reviews
6. Defect Management - 120 mins.
  6.1 Introduction
  6.2 When Can a Defect be Detected?
  6.3 Defect Report Fields
  6.4 Defect Classification
  6.5 Root Cause Analysis
7. Test Tools - 45 mins.
  7.1 Introduction
  7.2 Test Tools and Automation
    7.2.1 Test Design Tools
    7.2.2 Test Data Preparation Tools
    7.2.3 Automated Test Execution Tools
8. References
  8.1 Standards
  8.2 ISTQB Documents
  8.3 Books
  8.4 Other References
9. Index
Acknowledgements
This document was produced by a core team from the International Software Testing Qualifications Board Advanced Level Working Group - Advanced Test Analyst: Judy McKay (Chair), Mike Smith, Erik van Veenendaal.

The core team thanks the review team and the National Boards for their suggestions and input.

At the time the Advanced Level Syllabus was completed, the Advanced Level Working Group had the following membership (alphabetical order): Graham Bath, Rex Black, Maria Clara Choucair, Debra Friedenberg, Bernard Homès (Vice Chair), Paul Jorgensen, Judy McKay, Jamie Mitchell, Thomas Mueller, Klaus Olsen, Kenji Onishi, Meile Posthuma, Eric Riou du Cosquer, Jan Sabak, Hans Schaefer, Mike Smith (Chair), Geoff Thompson, Erik van Veenendaal, Tsuyoshi Yumoto.

The following persons participated in the reviewing, commenting and balloting of this syllabus: Graham Bath, Arne Becher, Rex Black, Piet de Roo, Frans Dijkman, Mats Grindal, Kobi Halperin, Bernard Homès, Maria Jönsson, Junfei Ma, Eli Margolin, Rik Marselis, Don Mills, Gary Mogyorodi, Stefan Mohacsi, Reto Mueller, Thomas Mueller, Ingvar Nordstrom, Tal Pe'er, Raluca Madalina Popescu, Stuart Reid, Jan Sabak, Hans Schaefer, Marco Sogliani, Yaron Tsubery, Hans Weiberg, Paul Weymouth, Chris van Bael, Jurian van der Laar, Stephanie van Dijk, Erik van Veenendaal, Wenqiang Zheng, Debi Zylbermann.

This document was formally released by the General Assembly of the ISTQB on xxx (TBD).
0.2 Overview
The Advanced Level is comprised of three separate syllabi:
- Test Manager
- Test Analyst
- Technical Test Analyst

The Advanced Level Overview document [ISTQB_AL_OVIEW] includes the following information:
- Business Outcomes for each syllabus
- Summary for each syllabus
- Relationships between the syllabi
- Description of cognitive levels (K-levels)
- Appendices
Learning Objectives for Testing Process

1.2 Testing in the Software Development Lifecycle
TA-1.2.1 (K2) Explain how and why the timing and level of involvement for the Test Analyst varies when working with different lifecycle models
1.1 Introduction
In the ISTQB Foundation Level syllabus, the fundamental test process was described as including the following activities:
- Planning, monitoring and control
- Analysis and design
- Implementation and execution
- Evaluating exit criteria and reporting
- Test closure activities

At the Advanced Level, some of these activities are considered separately in order to provide additional refinement and optimization of the processes, to better fit the software development lifecycle, and to facilitate effective test monitoring and control. The activities at this level are considered as follows:
- Planning, monitoring and control
- Analysis
- Design
- Implementation
- Execution
- Evaluating exit criteria and reporting
- Test closure activities

These activities can be implemented sequentially, or some can be implemented in parallel, e.g., design could be performed in parallel with implementation (e.g., exploratory testing). Determining the right tests and test cases, designing them and executing them are the primary areas of concentration for the Test Analyst. While it is important to understand the other steps in the test process, the majority of the Test Analyst's work usually is done during the analysis, design, implementation and execution activities of the testing project.

Advanced testers face a number of challenges when introducing the different testing aspects described in this syllabus into the context of their own organizations, teams and tasks. It is important to consider the different software development lifecycles as well as the type of system being tested, as these factors can influence the approach to testing.
Testing activities must be aligned with the chosen software development lifecycle model, whose nature may be sequential, iterative, or incremental. For example, in the sequential V-model, the ISTQB fundamental test process applied to the system test level could align as follows:
- System test planning occurs concurrently with project planning, and test control continues until system test execution and closure are complete.
- System test analysis and design occur concurrently with requirements specification, system and architectural (high-level) design specification, and component (low-level) design specification.
- System test environment (e.g., test beds, test rig) implementation might start during system design, though the bulk of it typically would occur concurrently with coding and component test, with work on system test implementation activities often stretching until just days before the start of system test execution.
- System test execution begins when the system test entry criteria are all met (or waived), which typically means that at least component testing and often also component integration testing are complete. System test execution continues until the system test exit criteria are met.
- Evaluation of system test exit criteria and reporting of system test results occur throughout system test execution, generally with greater frequency and urgency as project deadlines approach.
- System test closure activities occur after the system test exit criteria are met and system test execution is declared complete, though they can sometimes be delayed until after acceptance testing is over and all project activities are finished.

Iterative and incremental models may not follow the same order of tasks and may exclude some tasks. For example, an iterative model may utilize a reduced set of the standard test processes for each iteration. Analysis and design, implementation and execution, and evaluation and reporting may be conducted for each iteration, whereas planning is done at the beginning of the project and closure reporting is done at the end.

In an Agile project, it is common to use a less formalized process and a much closer working relationship that allows changes to occur more easily within the project. Because Agile is a lightweight process, there is less comprehensive test documentation, in favor of a more rapid method of communication such as daily stand-up meetings (called "stand-up" because they are very quick, usually 10-15 minutes, so no one needs to sit down and everyone stays engaged).

Agile projects, of all the lifecycle models, require the earliest involvement from the Test Analyst. The Test Analyst should expect to be involved from the initiation of the project, working with the developers as they do their initial architecture and design work. Reviews may not be formalized but are continuous as the software evolves. Involvement is expected to be throughout the project, and the Test Analyst should be available to the team. Because of this immersion, members of Agile teams are usually dedicated to single projects and are fully involved in all aspects of the project.

Iterative/incremental models range from the Agile approach, where there is an expectation for change as the software evolves, to iterative/incremental development models that exist within a V-model (sometimes called embedded iterative).
In the case of an embedded iterative model, the Test Analyst should expect to be involved in the standard planning and design aspects, but would then move to a more interactive role as the software is developed, tested, changed and deployed.

Whatever the software development lifecycle being used, it is important for the Test Analyst to understand the expectations for involvement as well as the timing of that involvement. There are many hybrid models in use, such as the iterative-within-a-V-model noted above. The Test Analyst often must determine the most effective role and work toward that, rather than depending on the definition of a set model to indicate the proper moment of involvement.
that is achieved will roll up into the overall project metrics. It is important that the information entered into the various tracking tools be as accurate as possible so the metrics reflect reality. Accurate metrics allow managers to manage a project (monitor) and to initiate changes as needed (control). For example, a high number of defects being reported from one area of the software may indicate that additional testing effort is needed in that area. Requirements and risk coverage information (traceability) may be used to prioritize remaining work and to allocate resources. Root cause information is used to determine areas for process improvement. If the data that is recorded is accurate, the project can be controlled and accurate status information can be reported to the stakeholders. Future projects can be planned more effectively when the planning considers data gathered from past projects. There are myriad uses for accurate data. It is part of the Test Analyst's job to ensure that the data is accurate, timely and objective.
- Lifecycle model used (e.g., an Agile approach aims for just enough documentation)
- The requirement for traceability from the test basis through test analysis and design

Depending on the scope of the testing, test analysis and design address the quality characteristics for the test object(s). The ISO 25000 standard [ISO25000] (which is replacing ISO 9126) provides a useful reference. When testing hardware/software systems, additional characteristics may apply.

The processes of test analysis and test design may be enhanced by intertwining them with reviews and static analysis. In fact, conducting test analysis and test design is often a form of static testing because problems may be found in the basis documents during this process. Test analysis and test design based on the requirements specification is an excellent way to prepare for a requirements review meeting. Reading the requirements in order to use them for creating tests requires understanding each requirement and being able to determine a way to assess its fulfillment. This activity often uncovers requirements that are not clear, are untestable or do not have defined acceptance criteria. Similarly, test work products such as test cases, risk analyses, and test plans should be subjected to reviews.

Some projects, such as those following an Agile lifecycle, may have only minimally documented requirements. These are sometimes in the form of user stories which describe small but demonstrable bits of functionality. A user story should include a definition of the acceptance criteria. If the software is able to demonstrate that it has fulfilled the acceptance criteria, it is usually considered to be ready for integration with the other completed functionality, or may already have been integrated in order to demonstrate its functionality.

During test design, the required detailed test infrastructure requirements may be defined, although in practice these may not be finalized until test implementation. It must be remembered that test infrastructure includes more than test objects and testware. For example, the infrastructure requirements may include rooms, equipment, personnel, software, tools, peripherals, communications equipment, user authorizations, and all other items required to run the tests.

The exit criteria for test analysis and test design will vary depending on the project parameters, but all items discussed in these two sections should be considered for inclusion in the defined exit criteria. It is important that the criteria be measurable and that they ensure all the information and preparation required for the subsequent steps have been provided.
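To illustrate what "measurable" exit criteria can mean in practice, the following minimal Python sketch evaluates a set of defined criteria mechanically. The metric names and thresholds are assumptions invented for the example, not values prescribed by this syllabus.

def evaluate_exit_criteria(metrics, criteria):
    """Return (met, failures): whether all exit criteria hold, and which failed."""
    failures = [name for name, check in criteria.items() if not check(metrics)]
    return (not failures), failures

metrics = {
    "requirements_coverage": 0.97,  # fraction of requirements with at least one test
    "open_critical_defects": 0,
    "test_cases_executed": 0.97,    # fraction of planned tests actually run
}

criteria = {
    "requirements coverage >= 95%": lambda m: m["requirements_coverage"] >= 0.95,
    "no open critical defects":     lambda m: m["open_critical_defects"] == 0,
    "at least 95% of tests run":    lambda m: m["test_cases_executed"] >= 0.95,
}

met, failures = evaluate_exit_criteria(metrics, criteria)
print("Exit criteria met" if met else f"Not met: {failures}")

Criteria expressed this way can be checked repeatedly as the numbers change, which is what distinguishes a measurable criterion from a subjective one.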
team to ensure that the software will be released for testing in a testable order. During test implementation, Test Analysts should finalize and confirm the order in which manual and automated tests are to be run, carefully checking for constraints that might require tests to be run in a particular order. Dependencies must be documented and checked.

The level of detail and associated complexity for work done during test implementation may be influenced by the detail of the test cases and test conditions. In some cases regulatory rules apply, and tests should provide evidence of compliance to applicable standards such as the United States Federal Aviation Administration's DO-178B/ED-12B [RTCA DO-178B/ED-12B].

As specified above, test data is needed for testing, and in some cases these sets of data can be quite large. During implementation, Test Analysts create input and environment data to load into databases and other such repositories. Test Analysts also create data to be used with data-driven automation tests as well as for manual testing.

Test implementation is also concerned with the test environment(s). During this stage the environment(s) should be fully set up and verified prior to test execution. A "fit for purpose" test environment is essential, i.e., the test environment should be capable of enabling the exposure of the defects present during controlled testing, operate normally when failures are not occurring, and adequately replicate, if required, the production or end-user environment for higher levels of testing. Test environment changes may be necessary during test execution depending on unanticipated changes, test results or other considerations. If environment changes do occur during execution, it is important to assess the impact of the changes on tests that have already been run.

During test implementation, testers must ensure that those responsible for the creation and maintenance of the test environment are known and available, and that all the testware and test support tools and associated processes are ready for use. This includes configuration management, defect management, and test logging and management. In addition, Test Analysts must verify the procedures that gather data for exit criteria evaluation and test results reporting.

It is wise to use a balanced approach to test implementation as determined during test planning. For example, risk-based analytical test strategies are often blended with dynamic test strategies. In this case, some percentage of the test implementation effort is allocated to testing which does not follow predetermined scripts (unscripted). Unscripted testing should not be ad hoc or aimless, as this can be unpredictable in duration and coverage unless time-boxed and chartered. Over the years, testers have developed a variety of experience-based techniques, such as attacks, error guessing [Myers79], and exploratory testing. Test analysis, test design, and test implementation still occur, but they occur primarily during test execution.

When following such dynamic test strategies, the results of each test influence the analysis, design, and implementation of the subsequent tests. While these strategies are lightweight and often effective at finding defects, there are some drawbacks. These techniques require expertise from the Test Analyst, duration can be difficult to predict, coverage can be difficult to track, and repeatability can be lost without good documentation or tool support.
testing techniques helps to guard against test escapes due to gaps in scripted coverage and to circumvent the pesticide paradox.

At the heart of the test execution activity is the comparison of actual results with expected results. Test Analysts must bring attention and focus to these tasks, otherwise all the work of designing and implementing the test can be wasted when failures are missed (false-negative result) or correct behavior is misclassified as incorrect (false-positive result). If the expected and actual results do not match, an incident has occurred. Incidents must be carefully scrutinized to determine the cause (which might or might not be a defect in the test object) and to gather data to assist with the resolution of the incident (see Chapter 6 for further details on defect management).

When a failure is identified, the test documentation (test specification, test case, etc.) should be carefully evaluated to ensure correctness. A test document can be incorrect for a number of reasons. If it is incorrect, it should be corrected and the test should be re-run. Since changes in the test basis and the test object can render a test case incorrect even after the test has been run successfully many times, testers should remain aware of the possibility that the observed results could be due to an incorrect test.

During test execution, test results must be logged appropriately. Tests which were run but for which results were not logged may have to be repeated to identify the correct result, leading to inefficiency and delays. (Note that adequate logging can address the coverage and repeatability concerns associated with test techniques such as exploratory testing.) Since the test object, testware, and test environments may all be evolving, logging should identify the specific versions tested as well as specific environment configurations. Test logging provides a chronological record of relevant details about the execution of tests.

Results logging applies both to individual tests and to activities and events. Each test should be uniquely identified and its status logged as test execution proceeds. Any events that affect the test execution should be logged. Sufficient information should be logged to measure test coverage and document reasons for delays and interruptions in testing. In addition, information must be logged to support test control, test progress reporting, measurement of exit criteria, and test process improvement. (A sketch of such a log record follows the list below.)

Logging varies depending on the level of testing and the strategy. For example, if automated component testing is occurring, the automated tests should produce most of the logging information. If manual testing is occurring, the Test Analyst will log the information regarding the test execution, often into a test management tool that will track the test execution information. In some cases, as with test implementation, the amount of test execution information that is logged is influenced by regulatory or audit requirements.

In some cases, users or customers may participate in test execution. This can serve as a way to build their confidence in the system, though that presumes that the tests find few defects. Such an assumption is often invalid in early test levels, but might be valid during acceptance test.

The following are some specific areas that should be considered during test execution:
- Notice and explore irrelevant oddities. Observations or results that may seem irrelevant are often indicators for defects that (like icebergs) are lurking beneath the surface.
- Check that the product is not doing what it is not supposed to do. Checking that the product does what it is supposed to do is a normal focus of testing, but the Test Analyst must also be sure the product is not misbehaving by doing something it should not be doing (for example, additional undesired functions).
- Build the test suite and expect it to grow and change over time. The code will evolve and additional tests will need to be implemented to cover these new functionalities, as well as to check for regressions in other areas of the software. Gaps in testing are often discovered during execution. Building the test suite is a continuous process.
- Take notes for the next testing effort. The testing tasks do not end when the software is provided to the user or distributed to the market. A new version or release of the software will most likely be produced, so knowledge should be stored and transferred to the testers responsible for the next testing effort.
- Do not expect to rerun all manual tests. It is unrealistic to expect that all manual tests will be rerun. If a problem is suspected, the Test Analyst should investigate it and note it rather than assume it will be caught in a subsequent execution of the test cases.
- Mine the data in the defect tracking tool for additional test cases. Consider creating test cases for defects that were discovered during unscripted or exploratory testing and add them to the regression test suite.
- Find the defects before regression testing. Time is often limited for regression testing, and finding failures during regression testing can result in schedule delays. Regression tests generally do not find a large proportion of the defects, mostly because they are tests which have already been run (e.g., for a previous version of the same software), and defects should have been detected in those previous runs. This does not mean that regression tests should be eliminated altogether, only that the effectiveness of regression tests, in terms of the capacity to detect new defects, is lower than that of other tests.
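As a rough illustration of the logging requirements described above (unique test identification, status, versions of the test object and testware, environment configuration, timing and events), a test execution log record might be modeled as follows. The field names are assumptions for the example, not a prescribed schema.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TestLogEntry:
    test_id: str            # unique identifier of the executed test
    status: str             # e.g., "passed", "failed", "blocked"
    build_version: str      # specific version of the test object
    testware_version: str   # version of the test case/script used
    environment: str        # specific environment configuration
    executed_at: datetime = field(default_factory=datetime.now)
    notes: str = ""         # delays, interruptions, observed oddities

# One logged execution, including an event that affected the test run.
entry = TestLogEntry("TC-042", "failed", "2.3.1-rc2", "v7", "staging-db-5",
                     notes="timeout after step 3; environment restarted")
print(entry)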
Learning Objectives for Test Management: Responsibilities for the Test Analyst

2.2 Test Progress Monitoring and Control
TA-2.2.1 (K2) Explain the types of information that must be tracked during testing to enable adequate monitoring and controlling of the project
2.1 Introduction
While there are many areas in which the Test Analyst interacts with and supplies data for the Test Manager, this section concentrates on the specific areas of the testing process in which the Test Analyst is a major contributor. It is expected that the Test Manager will seek the information needed from the Test Analyst.
requirement A will be considered to be fulfilled. This may or may not be correct. In many cases, more test cases are needed to thoroughly test a requirement, but because of limited time, only a subset of those tests is actually created. For example, if 20 test cases were needed to thoroughly test the implementation of a requirement, but only 10 were created and run, then the requirements coverage information will indicate 100% coverage when in fact only 50% coverage was achieved. Accurate tracking of the coverage as well as tracking the reviewed status of the requirements themselves can be used as a confidence measure. The amount (and level of detail) of information to be recorded depends on several factors, including the software development lifecycle model. For example, in Agile projects typically less status information will be recorded due to the close interaction of the team and more face-to-face communication.
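The coverage pitfall just described can be made concrete with a small sketch; the numbers mirror the 20-needed/10-created example in the text.

tests_needed = 20    # test cases required to thoroughly test the requirement
tests_created = 10   # test cases actually written
tests_run = 10       # all created tests were executed

reported = tests_run / tests_created  # what the tracking tool typically shows
actual = tests_run / tests_needed     # coverage against the real need

print(f"Reported coverage: {reported:.0%}")  # 100%
print(f"Actual coverage:   {actual:.0%}")    # 50%

The gap between the two figures is invisible unless the number of tests needed, not just the number created, is recorded somewhere.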
These tasks are performed iteratively throughout the project lifecycle to deal with emerging risks and changing priorities, and to regularly evaluate and communicate risk status. Test Analysts should work within the risk-based testing framework established by the Test Manager for the project. They should contribute their knowledge of the business domain risks that are inherent in the project, such as risks related to safety, business and economic concerns, and political factors.
- Loss of customers

Given the available risk information, the Test Analyst needs to establish the levels of business risk per the guidelines established by the Test Manager. These could be classified with terms (e.g., low, medium, high) or numbers. Unless there is a way to objectively measure the risk on a defined scale, it cannot be a true quantitative measure. Accurately measuring probability/likelihood and cost/consequence is usually very difficult, so determining risk level is usually done qualitatively. Numbers may be assigned to the qualitative value, but that does not make it a true quantitative measure. For example, the Test Manager may determine that business risk should be categorized with a value from 1 to 10, with 1 being the highest, and therefore riskiest, impact to the business.

Once the likelihood (the assessment of the technical risk) and impact (the assessment of the business risk) have been assigned, these values may be multiplied together to determine the overall risk rating for each risk item. That overall rating is then used to prioritize the risk mitigation activities. Some risk-based testing models, such as PRISMA [vanVeenendaal12], do not combine the risk values, allowing the test approach to address the technical and business risks separately.
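A minimal sketch of the likelihood × impact calculation described above. The risk items and the scales are assumptions for the example: 1 (low) to 5 (high) for both factors, so a higher product means a higher risk (unlike the 1-is-highest scale in the example above, where the ordering would be inverted).

risk_items = [
    # (risk item, likelihood 1-5, business impact 1-5)
    ("incorrect interest calculation", 3, 5),
    ("slow report generation",         4, 2),
    ("garbled printout layout",        2, 1),
]

# Overall rating = likelihood x impact; highest-rated items are mitigated first.
rated = [(name, likelihood * impact) for name, likelihood, impact in risk_items]
for name, rating in sorted(rated, key=lambda r: r[1], reverse=True):
    print(f"{rating:>2}  {name}")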
whether to extend testing or to transfer the remaining risk onto the users, customers, help desk/technical support, and/or operational staff.

2.4.4.2 Adjusting Testing for Future Test Cycles

Risk assessment is not a one-time activity performed before the start of test implementation; it is a continuous process. Each future planned test cycle should be subjected to new risk analysis to take into account such factors as:
- Any new or significantly changed product risks
- Unstable or defect-prone areas discovered during the testing
- Risks from fixed defects
- Typical defects found during testing
- Areas that have been under-tested (low test coverage)

If additional time for testing is allocated, it may be possible to expand the risk coverage into areas of lower risk.
3.1 Introduction
The test design techniques considered in this chapter are divided into the following categories:
- Specification-based (or behavior-based or black box)
- Defect-based
- Experience-based

These techniques are complementary and may be used as appropriate for any given test activity, regardless of which level of testing is being performed. Note that all three categories of techniques can be used to test both functional and non-functional quality characteristics. Testing non-functional characteristics is discussed in the next chapter.

The test design techniques discussed in these sections may focus primarily on determining optimal test data (e.g., equivalence partitions) or on deriving test sequences (e.g., state models). It is common to combine techniques to create complete test cases.
tested). This technique is strongest when used in combination with boundary value analysis, which expands the test values to include those on the edges of the partitions. This is a commonly used technique for smoke testing a new build or a new release as it quickly determines if basic functionality is working.

Limitations/Difficulties
If the assumption is incorrect and the values in the partition are not handled in exactly the same way, this technique may miss defects. It is also important to select the partitions carefully. For example, an input field that accepts positive and negative numbers would be better tested as two valid partitions, one for the positive numbers and one for the negative numbers, because of the likelihood of different handling. Depending on whether or not zero is allowed, this could become another partition as well. It is important for the Test Analyst to understand the underlying processing in order to determine the best partitioning of the values.

Coverage
Coverage is determined by taking the number of partitions for which a value has been tested and dividing that number by the number of partitions that have been identified. Using multiple values for a single partition does not increase the coverage percentage.

Types of Defects
This technique finds functional defects in the handling of various data values.
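A minimal sketch of the partition coverage calculation just described, using the positive/negative/zero example from the text; the representative values chosen are assumptions for the example.

def classify(value):
    """The partitioning assumed above: the sign of the input decides the partition."""
    if value < 0:
        return "negative"
    if value == 0:
        return "zero"
    return "positive"

partitions = {"negative": -5, "zero": 0, "positive": 7}  # one representative each

tested = {classify(v) for v in (-5, 7)}       # partitions exercised so far
coverage = len(tested) / len(partitions)
print(f"Partition coverage: {coverage:.0%}")  # 67% until a zero value is also tested

Note that testing -5 and -9 together would still count as one partition; only a value from the untested "zero" partition raises the coverage figure.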
tested. Only ordered partitions can be used for boundary value analysis, but it is not limited to a range of valid inputs. For example, when testing for the number of cells supported by a spreadsheet, there is a partition that contains the number of cells up to and including the maximum allowed cells (the boundary) and another partition that begins with one cell over the maximum (over the boundary).

Coverage
Coverage is determined by taking the number of boundary conditions that are tested and dividing that by the number of identified boundary conditions (using either the two-value or the three-value method). This provides the coverage percentage for the boundary testing.

Types of Defects
Boundary value analysis reliably finds displacement or omission of boundaries, and may find cases of extra boundaries. This technique finds defects regarding the handling of the boundary values, particularly errors with less-than and greater-than logic (i.e., displacement). It can also be used to find non-functional defects, for example tolerance of load limits (e.g., the system supports 10,000 concurrent users).
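To illustrate the two-value and three-value methods mentioned above, the following sketch derives boundary test values for an ordered integer partition; the range 1..100 is an assumption for the example.

low, high = 1, 100   # assumed valid range of an integer input field

# Two-value method: each boundary plus its nearest invalid neighbor.
two_value = [low - 1, low, high, high + 1]

# Three-value method: the values below, on, and above each boundary.
three_value = [low - 1, low, low + 1, high - 1, high, high + 1]

print("Two-value:  ", two_value)    # [0, 1, 100, 101]
print("Three-value:", three_value)  # [0, 1, 2, 99, 100, 101]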
Types of Defects
Typical defects include incorrect processing based on particular combinations of conditions resulting in unexpected results. During the creation of the decision tables, defects may be found in the specification document. The most common types of defects are omissions (there is no information regarding what should actually happen in a certain situation) and contradictions. Testing may also find issues with condition combinations that are not handled or are not handled well.
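The two defect types named above, omissions and contradictions, can be checked for mechanically while a decision table is being built. The following sketch is illustrative only; the conditions, rules and actions are assumptions for the example.

from itertools import product

conditions = ["valid_account", "sufficient_funds"]

# Each rule pairs a full condition combination with an action.
rules = [
    ((True, True),  "approve"),
    ((True, False), "decline"),
    ((True, False), "refer"),    # same combination, different action
]

# Enumerate every condition combination and flag gaps and conflicts.
for combo in product([True, False], repeat=len(conditions)):
    actions = {action for cond, action in rules if cond == combo}
    named = dict(zip(conditions, combo))
    if not actions:
        print(f"Omission: no rule covers {named}")
    elif len(actions) > 1:
        print(f"Contradiction: {named} -> {sorted(actions)}")

Run against this deliberately flawed table, the check reports two omissions (the combinations where the account is invalid) and one contradiction, the same kinds of specification defects the text says are typically uncovered while building the table.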
software are good candidates for this type of testing. Control systems, e.g., traffic light controllers, are also good candidates for this type of testing.

Limitations/Difficulties
Determining the states is often the most difficult part of defining the state table or diagram. When the software has a user interface, the various screens that are displayed for the user are often used to define the states. For embedded software, the states may be dependent upon the states that the hardware will experience.

Besides the states themselves, the basic unit of state transition testing is the individual transition, also known as a 0-switch. Simply testing all transitions will find some kinds of state transition defects, but more may be found by testing sequences of transitions. A sequence of two successive transitions is called a 1-switch; a sequence of three successive transitions is a 2-switch, and so forth. (These switches are sometimes alternatively designated as N-1 switches, where N represents the number of transitions that will be traversed. A single transition, for instance (a 0-switch), would be a 1-1 switch.) [Bath08]

Coverage
As with other types of test techniques, there is a hierarchy of levels of test coverage. The minimum acceptable degree of coverage is to have visited every state and traversed every transition. 100% transition coverage (also known as 100% 0-switch coverage or 100% logical branch coverage) will guarantee that every state is visited, unless the system design or the state transition model (diagram or table) is defective. Depending on the relationships between states and transitions, it may be necessary to traverse some transitions more than once in order to execute other transitions a single time.

The term "n-switch coverage" refers to the use of switches of length greater than one transition. For example, achieving 100% 1-switch coverage requires that every valid sequence of two successive transitions has been tested at least once. This testing may stimulate some types of failures that 100% 0-switch coverage would miss.

"Round-trip coverage" applies to situations in which sequences of transitions form loops. 100% round-trip coverage is achieved when all loops from any state back to the same state have been tested. This must be tested for all states that are included in loops.

"Full coverage" is achieved when all possible sequences of length n-1 have been tested, where n is the number of states in the test item. For any of these approaches, a still higher degree of coverage will attempt to include all invalid transitions. Coverage requirements and covering sets for state transition testing must identify whether invalid transitions are included.

Types of Defects
Typical defects include incorrect processing in the current state that is a result of the processing that occurred in a previous state, incorrect or unsupported transitions, states with no exits, and the need for states or transitions that do not exist. During the creation of the state machine model, defects may be found in the specification document. The most common types of defects are omissions (there is no information regarding what should actually happen in a certain situation) and contradictions.
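To make the 0-switch and 1-switch concepts concrete, the following sketch derives both sets of test sequences from a small transition table; the state model itself is an assumption for the example.

# Transition table: (current state, event) -> next state.
transitions = {
    ("idle", "start"):    "running",
    ("running", "pause"): "paused",
    ("paused", "resume"): "running",
    ("running", "stop"):  "idle",
}

# 0-switch coverage: every individual transition must be exercised.
zero_switch = list(transitions)

# 1-switch coverage: every valid pair of consecutive transitions, i.e.,
# the second transition starts in the state where the first one ended.
one_switch = [
    (t1, t2)
    for t1, end_state in transitions.items()
    for t2 in transitions
    if t2[0] == end_state
]

print(f"{len(zero_switch)} single transitions for 0-switch coverage")
print(f"{len(one_switch)} two-transition sequences for 1-switch coverage")

For this four-transition model there are six valid 1-switch sequences, illustrating why n-switch coverage grows quickly with the length of the sequences.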
excluded, if certain options are incompatible. This does not assume that the combined factors won't affect each other; they very well might, but they should affect each other in acceptable ways. Combinatorial testing provides a means to identify a suitable subset of these combinations to achieve a predetermined level of coverage. The number of items to include in the combinations can be selected by the Test Analyst, including single items, pairs, triples or more [Copeland03]. There are a number of tools available to aid the Test Analyst in this task (see www.pairwise.org for samples). These tools either require the parameters and their values to be listed (pairwise testing and orthogonal array testing) or to be represented in a graphical form (classification trees) [Grochtmann94]. Pairwise testing is a method applied to testing pairs of variables in combination. Orthogonal arrays are predefined, mathematically accurate tables that allow the Test Analyst to substitute the items to be tested for the variables in the array, producing a set of combinations that will achieve a level of coverage when tested [Koomen06]. Classification tree tools allow the Test Analyst to define the size of combinations to be tested (i.e., combinations of two values, three values, etc.).

Applicability
The problem of having too many combinations of parameter values manifests in at least two different situations related to testing. Some test cases contain several parameters, each with a number of possible values, for instance a screen with several input fields. In this case, combinations of parameter values make up the input data for the test cases. Furthermore, some systems may be configurable in a number of dimensions, resulting in a potentially large configuration space. In both these situations, combinatorial testing can be used to identify a subset of combinations, feasible in size.

For parameters with a large number of values, equivalence class partitioning, or some other selection mechanism, may first be applied to each parameter individually to reduce the number of values for each parameter before combinatorial testing is applied to reduce the set of resulting combinations. These techniques are usually applied to the integration, system and system integration levels of testing.

Limitations/Difficulties
The major limitation with these techniques is the assumption that the results of a few tests are representative of all tests and that those few tests represent expected usage. If there is an unexpected interaction between certain variables, it may go undetected with this type of testing if that particular combination is not tested. These techniques can be difficult to explain to a non-technical audience as they may not understand the logical reduction of tests.

Identifying the parameters and their respective values is sometimes difficult. Finding a minimal set of combinations to satisfy a certain level of coverage is difficult to do manually; tools usually are used to find the minimum set of combinations. Some tools support the ability to force some (sub-)combinations to be included in or excluded from the final selection of combinations. This capability may be used by the Test Analyst to emphasize or de-emphasize factors based on domain knowledge or product usage information.

Coverage
There are several levels of coverage. The lowest level of coverage is called 1-wise or singleton coverage. It requires each value of every parameter to be present in at least one of the chosen combinations. The next level of coverage is called 2-wise or pairwise coverage. It requires every pair of values of any two parameters to be included in at least one combination. This idea can be generalized to n-wise coverage, which requires every sub-combination of values of any set of n parameters to be included in the set of selected combinations. The higher the n, the more combinations are needed to reach 100% coverage. Minimum coverage with these techniques is to have one test case for every combination produced by the tool.
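The pairwise coverage definition above can be checked mechanically. The following sketch measures the 2-wise coverage of a candidate test set; the parameters, values and tests are assumptions for the example (real projects would normally generate the set with a pairwise tool).

from itertools import combinations

parameters = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "Linux"],
    "locale": ["en", "de"],
}
names = list(parameters)

# Every pair of values across any two parameters must appear in some test.
required = {
    ((p1, v1), (p2, v2))
    for p1, p2 in combinations(names, 2)
    for v1 in parameters[p1]
    for v2 in parameters[p2]
}

def pairs_in(test):
    """The parameter-value pairs a single test combination exercises."""
    return {((p1, test[p1]), (p2, test[p2])) for p1, p2 in combinations(names, 2)}

# A candidate set of 4 tests instead of the 8 exhaustive combinations.
tests = [
    {"browser": "Chrome",  "os": "Windows", "locale": "en"},
    {"browser": "Chrome",  "os": "Linux",   "locale": "de"},
    {"browser": "Firefox", "os": "Windows", "locale": "de"},
    {"browser": "Firefox", "os": "Linux",   "locale": "en"},
]

covered = set().union(*(pairs_in(t) for t in tests))
print(f"Pairwise coverage: {len(covered & required)}/{len(required)}")  # 12/12

Here four tests achieve 100% pairwise coverage of a space of eight full combinations, which is exactly the reduction the technique is designed to deliver.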
Types of Defects
The most common defects found with this type of testing are defects related to the combined values of several parameters.
Because stories are small increments of functionality, there may be a requirement to produce drivers and stubs in order to actually test the piece of functionality that is delivered. This usually requires an ability to program and to use tools that will help with the testing, such as API testing tools. Creation of the drivers and stubs is usually the responsibility of the developer, although a Technical Test Analyst may also be involved in producing this code and utilizing the API testing tools. If a continuous integration model is used, as is the case in most Agile projects, the need for drivers and stubs is minimized.

Coverage
Minimum coverage of a user story is to verify that each of the specified acceptance criteria has been met.

Types of Defects
Defects are usually functional, in that the software fails to provide the specified functionality. Defects are also seen with integration issues between the functionality in the new story and the functionality that already exists. Because stories may be developed independently, performance, interface and error handling issues may be seen. It is important for the Test Analyst to perform both testing of the individual functionality supplied and integration testing any time a new story is released for testing.
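The drivers and stubs mentioned above can be very small. The following sketch shows a hypothetical story ("cash is dispensed only if the balance covers the withdrawal") tested in isolation through a driver and a stub; all names, rules and values here are invented for the example.

def account_service_stub(account_id):
    """Stub standing in for the not-yet-integrated account service."""
    return 100.0   # fixed balance so the story can be tested in isolation

def withdraw(account_id, amount, get_balance):
    """The story under test: allow a withdrawal only when the balance covers it."""
    balance = get_balance(account_id)
    return "declined" if amount > balance else "dispensed"

# Driver: exercises the story's acceptance criteria directly.
assert withdraw("acct-1", 40.0, account_service_stub) == "dispensed"
assert withdraw("acct-1", 500.0, account_service_stub) == "declined"
print("Acceptance criteria for the withdrawal story verified")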
and OUT variables may land in the undetected domain. Domain analysis is a strong technique to use when working with a developer to define the testing areas.

Coverage
Minimum coverage for domain analysis is to have a test for each IN, OUT, ON and OFF value in each domain. Where values overlap (for example, the OUT value of one domain is an IN value in another domain), there is no need to duplicate the tests. Because of this, the actual tests needed are often fewer than four per domain.

Types of Defects
Defects include functional problems within the domain, boundary value handling, variable interaction issues and error handling (particularly for the values that are not in a valid domain).
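A minimal sketch of the IN/OUT/ON/OFF minimum coverage rule just described, applied to a single one-dimensional domain; the domain (integers 10..50, inclusive) and the chosen values are assumptions for the example.

low, high = 10, 50   # assumed closed domain of valid integer values

test_values = {
    "ON":  low,       # a value on the boundary of the domain
    "OFF": low - 1,   # a value just off the boundary, outside the domain
    "IN":  30,        # a representative value inside the domain
    "OUT": 99,        # a value well outside the domain
}

for kind, value in test_values.items():
    in_domain = low <= value <= high
    print(f"{kind:>3} -> {value} ({'inside' if in_domain else 'outside'} the domain)")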
have compiled their own taxonomies of likely or frequently seen defects. Whatever taxonomy is used, it is important to define the expected coverage prior to starting the testing.

Coverage
The technique provides coverage criteria which are used to determine when all the useful test cases have been identified. As a practical matter, the coverage criteria for defect-based techniques tend to be less systematic than for specification-based techniques, in that only general rules for coverage are given and the specific decision about what constitutes the limit of useful coverage is discretionary. As with other techniques, the coverage criteria do not mean that the entire set of tests is complete, but rather that defects being considered no longer suggest any useful tests based on that technique.

Types of Defects
The types of defects discovered usually depend on the taxonomy in use. If a user interface taxonomy is used, the majority of the discovered defects would likely be user interface related, but other defects can be discovered as a byproduct of the specific testing.
The more detailed the taxonomy, the more time it will take to develop and maintain it, but it will result in a higher level of reproducibility in the test results. Detailed taxonomies can be redundant, but they allow a test team to divide up the testing without a loss of information or coverage. Once the appropriate taxonomy has been selected, it can be used for creating test conditions and test cases. A risk-based taxonomy can help the testing focus on a specific risk area. Taxonomies can also be used for non-functional areas such as usability, performance, etc. Taxonomy lists are available in various publications, from IEEE, and on the Internet.
Coverage
When a taxonomy is used, coverage is determined by the appropriate data faults and types of defects. Without a taxonomy, coverage is limited by the experience and knowledge of the tester and the time available. The yield from this technique will vary based on how well the tester can target problematic areas.

Types of Defects
Typical defects are usually those defined in the particular taxonomy, or guessed by the Test Analyst, that might not have been found in specification-based testing.
Applicability
Good exploratory testing is planned, interactive, and creative. It requires little documentation about the system to be tested and is often used in situations where the documentation is not available or is not adequate for other testing techniques. Exploratory testing is often used to augment other testing and to serve as a basis for the development of additional test cases.

Limitations/Difficulties
Exploratory testing can be difficult to manage and schedule. Coverage can be sporadic and reproducibility is difficult. Using charters to designate the areas to be covered in a testing session and time-boxing to determine the time allowed for the testing is one method used to manage exploratory testing. At the end of a testing session or set of sessions, the Test Manager may hold a debriefing session to gather the results of the tests and determine the charters for the next sessions. Debriefing sessions are difficult to scale for large testing teams or large projects.

Another difficulty with exploratory sessions is to accurately track them in a test management system. This is sometimes done by creating test cases that are actually exploratory sessions. This allows the time allocated for the exploratory testing and the planned coverage to be tracked with the other testing efforts. Since reproducibility may be difficult with exploratory testing, this can also cause problems when needing to recall the steps to reproduce a failure. Some organizations use the capture/playback capability of a test automation tool to record the steps taken by an exploratory tester. This provides a complete record of all activities during the exploratory session (or any experience-based testing session). Digging through the details to find the actual cause of the failure can be tedious, but at least there is a record of all the steps that were involved.

Coverage
Charters may be created to specify tasks, objectives, and deliverables. Exploratory sessions are then planned to achieve those objectives. The charter may also identify where to focus the testing effort, what is in and out of scope of the testing session, and what resources should be committed to complete the planned tests. A session may be used to focus on particular defect types and other potentially problematic areas that can be addressed without the formality of scripted testing.

Types of Defects
Typical defects found with exploratory testing are scenario-based issues that were missed during scripted functional testing, issues that fall between functional boundaries, and workflow related issues. Performance and security issues are also sometimes uncovered during exploratory testing.
these techniques. As with the specification-based techniques, there is not one perfect technique for all situations. It is important for the Test Analyst to understand the advantages and disadvantages of each technique and to be able to select the best technique or set of techniques for the situation, considering the project type, schedule, access to information, skills of the tester and other factors that can influence the selection.
Learning Objectives for Testing Software Quality Characteristics

4.2 Quality Characteristics for Business Domain Testing
TA-4.2.1 (K2) Explain by example what testing techniques are appropriate to test accuracy, suitability, interoperability and compliance characteristics
TA-4.2.2 (K2) For the accuracy, suitability and interoperability characteristics, define the typical defects to be targeted
TA-4.2.3 (K2) For the accuracy, suitability and interoperability characteristics, define when the characteristic should be tested in the lifecycle
TA-4.2.4 (K4) For a given project context, outline the approaches that would be suitable to verify and validate both the implementation of the usability requirements and the fulfillment of the user's expectations
4.1 Introduction
While the previous chapter described specific techniques available to the tester, this chapter considers the application of those techniques in evaluating the principal characteristics used to describe the quality of software applications or systems. This syllabus discusses the quality characteristics which may be evaluated by a Test Analyst. The attributes to be evaluated by the Technical Test Analyst are considered in the Advanced Technical Test Analyst syllabus. The description of product quality characteristics provided in ISO 9126 is used as a guide to describing the characteristics. Other standards, such as the ISO 25000 [ISO25000] series (which has superseded ISO 9126), may also be of use. The ISO quality characteristics are divided into product quality characteristics (attributes), each of which may have sub-characteristics (sub-attributes). These are shown in the table below, together with an indication of which characteristics/sub-characteristics are covered by the Test Analyst and Technical Test Analyst syllabi:

Characteristic    Sub-Characteristics                                                        Test Analyst  Technical Test Analyst
Functionality     Accuracy, suitability, interoperability, compliance                       X
                  Security                                                                                X
Reliability       Maturity (robustness), fault-tolerance, recoverability, compliance                      X
Usability         Understandability, learnability, operability, attractiveness, compliance  X
Efficiency        Performance (time behavior), resource utilization, compliance                           X
Maintainability   Analyzability, changeability, stability, testability, compliance                        X
Portability       Adaptability, installability, co-existence, replaceability, compliance                  X
The Test Analyst should concentrate on the software quality characteristics of functionality and usability. Accessibility testing should also be conducted by the Test Analyst. Although it is not listed as a sub-characteristic, accessibility is often considered to be part of usability testing. Testing for the other quality characteristics is usually considered to be the responsibility of the Technical Test Analyst. While this allocation of work may vary in different organizations, it is the one that is followed in these ISTQB syllabi.

The sub-characteristic of compliance is shown for each of the quality characteristics. In the case of certain safety-critical or regulated environments, each quality characteristic may have to comply with specific standards and regulations (e.g., functionality compliance may indicate that the functionality complies with a specific standard, such as using a particular communication protocol in order to be able to send/receive data from a chip). Because those standards can vary widely depending on the industry, they will not be discussed in depth here. If the Test Analyst is working in an environment that is affected by compliance requirements, it is important to understand those requirements and to ensure that both the testing and the test documentation will fulfill the compliance requirements.

For all of the quality characteristics and sub-characteristics discussed in this section, the typical risks must be recognized so that an appropriate testing strategy can be formed and documented. Quality characteristic testing requires particular attention to lifecycle timing, required tools, software and documentation availability, and technical expertise. Without planning a strategy to deal with each characteristic and its unique testing needs, the tester may not have adequate planning, ramp-up and test execution time built into the schedule. Some of this testing, e.g., usability testing, can require allocation of special human resources, extensive planning, dedicated labs, specific tools, specialized testing skills and, in most cases, a significant amount of time. In some cases, usability testing may be performed by a separate group of usability, or user experience, experts.

Quality characteristic and sub-characteristic testing must be integrated into the overall testing schedule, with adequate resources allocated to the effort. Each of these areas has specific needs, targets specific issues and may occur at different times during the software development lifecycle, as discussed in the sections below.

While the Test Analyst may not be responsible for the quality characteristics that require a more technical approach, it is important that the Test Analyst be aware of the other characteristics and understand the overlap areas for testing. For example, a product that fails performance testing will also likely fail usability testing if it is too slow for the user to use effectively. Similarly, a product with interoperability issues among components is probably not ready for portability testing, as portability testing will tend to obscure the more basic problems when the environment is changed.
disabled users. Checking that applications and web sites are usable for the above users may also improve the usability for everyone else. Accessibility is discussed more below.

Usability testing tests the ease by which users can use or learn to use the system to reach a specified goal in a specific context. Usability testing is directed at measuring the following:
• Effectiveness: the capability of the software product to enable users to achieve specified goals with accuracy and completeness in a specified context of use
• Efficiency: the capability of the product to enable users to expend appropriate amounts of resources in relation to the effectiveness achieved in a specified context of use
• Satisfaction: the capability of the software product to satisfy users in a specified context of use

Attributes that may be measured include:
• Understandability: attributes of the software that affect the effort required by the user to recognize the logical concept and its applicability
• Learnability: attributes of the software that affect the effort required by the user to learn the application
• Operability: attributes of the software that affect the effort required by the user to conduct tasks effectively and efficiently
• Attractiveness: the capability of the software to be liked by the user

Usability testing is usually conducted in two steps:
• Formative usability testing: testing that is conducted iteratively during the design and prototyping stages to help guide (or "form") the design by identifying usability design defects
• Summative usability testing: testing that is conducted after implementation to measure the usability and identify problems with a completed component or system

Usability tester skills should include expertise or knowledge in the following areas:
• Sociology
• Psychology
• Conformance to national standards (including accessibility standards)
• Ergonomics

4.2.4.1 Conducting Usability Tests
Validation of the actual implementation should be done under conditions as close as possible to those under which the system will be used. This may involve setting up a usability lab with video cameras, mock-up offices, review panels, users, etc., so that development staff can observe the effect of the actual system on real people. Formal usability testing often requires some amount of preparing the users (these could be real users or user representatives), either by providing set scripts or instructions for them to follow. Other free-form tests allow the user to experiment with the software so the observers can determine how easy or difficult it is for the user to figure out how to accomplish their tasks.

Many usability tests may be executed by the Test Analyst as part of other tests, for example during functional system test. To achieve a consistent approach to the detection and reporting of usability defects in all stages of the lifecycle, usability guidelines may be helpful. Without usability guidelines, it may be difficult to determine what constitutes unacceptable usability. For example, is it unreasonable for a user to have to make 10 mouse clicks to log into an application? Without specific guidelines, the Test Analyst can be in the difficult position of defending defect reports that the developer wants to close because the software works as designed. It is very important to have verifiable usability specifications defined in the requirements, as well as a set of usability guidelines that are applied to all similar projects.
The guidelines should include such items as accessibility of instructions, clarity of prompts, number of clicks to complete an activity, error messaging, processing indicators (some type of indicator for the user that the system is processing and cannot accept further inputs at the time), screen layout guidelines, use of colors and sounds, and other factors that affect the user's experience.

4.2.4.2 Usability Test Specification
The principal techniques for usability testing are:
• Inspecting, evaluating or reviewing
• Dynamically interacting with prototypes
• Verifying and validating the actual implementation
• Conducting surveys and questionnaires

Inspecting, evaluating or reviewing
Inspection or review of the requirements specification and designs from a usability perspective that increases the users' level of involvement can be cost effective by finding problems early. Heuristic evaluation (systematic inspection of a user interface design for usability) can be used to find the usability problems in the design so that they can be attended to as part of an iterative design process. This involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the "heuristics"). Reviews are more effective when the user interface is more visible. For example, sample screen shots are usually easier to understand and interpret than a narrative description of the functionality provided by a particular screen. Visualization is important for an adequate usability review of the documentation.

Dynamically interacting with prototypes
When prototypes are developed, the Test Analyst should work with the prototypes and help the developers evolve the prototype by incorporating user feedback into the design. In this way, prototypes can be refined and the user can get a more realistic view of how the finished product will look and feel.

Verifying and validating the actual implementation
Where the requirements specify usability characteristics for the software (e.g., the number of mouse clicks to accomplish a specific goal), test cases should be created to verify that the software implementation has included these characteristics. For performing validation of the actual implementation, tests specified for functional system test may be developed as usability test scenarios. These test scenarios measure specific usability characteristics, such as learnability or operability, rather than functional outcomes.

Test scenarios for usability may be developed to specifically test syntax and semantics. Syntax is the structure or grammar of the interface (e.g., what can be entered in an input field), whereas semantics describes the meaning and purpose of the interface (e.g., reasonable and meaningful system messages and output provided to the user). Black box techniques (for example those described in Section 3.2), particularly use cases, which can be defined in plain text or with UML (Unified Modeling Language), are sometimes employed in usability testing.

Test scenarios for usability testing also need to include user instructions, allocation of time for pre- and post-test interviews for giving instructions and receiving feedback, and an agreed protocol for conducting the sessions. This protocol includes a description of how the test will be carried out, timings, note taking and session logging, and the interview and survey methods to be used.

Conducting surveys and questionnaires
Survey and questionnaire techniques may be applied to gather observations and feedback regarding user behavior with the system. Standardized and publicly available surveys such as SUMI (Software Usability Measurement Inventory) and WAMMI (Website Analysis and MeasureMent Inventory) permit benchmarking against a database of previous usability measurements.
In addition, since SUMI provides concrete measurements of usability, it can provide a set of completion/acceptance criteria.
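By way of illustration, the following Python sketch shows how questionnaire ratings might be aggregated and checked against an agreed acceptance threshold. The plain averaging used here is not the actual SUMI scoring method; it only demonstrates the principle of turning survey data into a measurable completion criterion:

# Illustrative only: aggregates questionnaire responses and checks them
# against an assumed acceptance threshold. This simple averaging is NOT
# the SUMI scoring method.
def satisfaction_score(responses: list[list[int]]) -> float:
    """Each response is a list of 1-5 ratings, one per question."""
    all_ratings = [rating for response in responses for rating in response]
    return sum(all_ratings) / len(all_ratings)

ACCEPTANCE_THRESHOLD = 4.0  # assumed project-specific criterion

responses = [
    [5, 4, 4, 5],  # participant 1
    [3, 4, 4, 4],  # participant 2
    [4, 5, 3, 4],  # participant 3
]

score = satisfaction_score(responses)
verdict = "meets" if score >= ACCEPTANCE_THRESHOLD else "does not meet"
print(f"Mean satisfaction: {score:.2f} "
      f"({verdict} the acceptance criterion of {ACCEPTANCE_THRESHOLD})")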
5.1 Introduction
A successful review process requires planning, participation and follow-up. Test Analysts must be active participants in the review process, providing their unique views. They should have formal review training to better understand their respective roles in any review process. All review participants must be committed to the benefits of a well-conducted review. When done properly, reviews can be the single biggest, and most cost-effective, contributor to overall delivered quality.

Regardless of the type of review being conducted, the Test Analyst must allow adequate time to prepare. This includes time to review the work product, time to check cross-referenced documents to verify consistency, and time to determine what might be missing from the work product. Without adequate preparation time, the Test Analyst could be restricted to editing what is already in the document rather than participating in an efficient review that maximizes the use of the review team's time and provides the best feedback possible. A good review includes understanding what is written, determining what is missing, and verifying that the described product is consistent with other products that are either already developed or are in development. For example, when reviewing an integration level test plan, the Test Analyst must also consider the items that are being integrated. What are the conditions needed for them to be ready for integration? Are there dependencies that must be documented? Is there data available to test the integration points? A review is not isolated to the work product being reviewed; it must also consider the interaction of that item with the others in the system.

It is easy for the author of a product being reviewed to feel criticized. The Test Analyst should be sure to approach any review comments from the viewpoint of working together with the author to create the best product possible. By using this approach, comments will be worded constructively and will be oriented toward the work product, not the author. For example, if a statement is ambiguous, it is better to say "I do not understand what I should be testing to verify that this requirement has been implemented correctly. Can you help me understand it?" rather than "This requirement is ambiguous and no one will be able to figure it out." The Test Analyst's job in a review is to ensure that the information provided in the work product is sufficient to support the testing effort. If the information is not there, is not clear, or does not provide the necessary level of detail, then this is likely to be a defect that needs to be corrected by the author. By maintaining a positive approach rather than a critical approach, comments will be better received and the meeting will be more productive.
Checklists used for requirements, use cases and user stories generally have a different focus than those used for code or architecture. These requirements-oriented checklists could include the following items:
• Testability of each requirement
• Acceptance criteria for each requirement
• Availability of a use case calling structure, if applicable
• Unique identification of each requirement/use case/user story
• Versioning of each requirement/use case/user story
• Traceability for each requirement from business/marketing requirements
• Traceability between requirements and use cases

The above is meant only to serve as an example. It is important to remember that if a requirement is not testable, meaning that it is defined in such a way that the Test Analyst cannot determine how to test it, then there is a defect in that requirement. For example, a requirement that states "The software should be very user friendly" is untestable. How can the Test Analyst determine if the software is user friendly, or even very user friendly? If, instead, the requirement said "The software must conform to the usability standards stated in the usability standards document", and if the usability standards document really exists, then this is a testable requirement. It is also an overarching requirement because this one requirement applies to every item in the interface. In this case, this one requirement could easily spawn many individual test cases in a non-trivial application. Traceability from this requirement, or perhaps from the usability standards document, to the test cases is also critical because if the referenced usability specification should change, all the test cases will need to be reviewed and updated as needed.

A requirement is also untestable if the tester is unable to determine whether the test passed or failed, or is unable to construct a test that can pass or fail. For example, "System shall be available 100% of the time, 24 hours per day, 7 days per week, 365 (or 366) days a year" is untestable.

A simple checklist for use case reviews may include the following questions:
• Is the main path (scenario) clearly defined?
• Are all alternative paths (scenarios) identified, complete with error handling?
• Are the user interface messages defined?
• Is there only one main path (there should be, otherwise there are multiple use cases)?
• Is each path testable?

A simple checklist for usability for a user interface of an application may include:
• Is each field and its function defined?
• Are all error messages defined?
• Are all user prompts defined and consistent?
• Is the tab order of the fields defined?
• Are there keyboard alternatives to mouse actions?
• Are there shortcut key combinations defined for the user (e.g., cut and paste)?
• Are there dependencies between fields (such as a certain date having to be later than another date)?
• Is there a screen layout?
• Does the screen layout match the specified requirements?
• Is there an indicator for the user that appears when the system is processing?
• Does the screen meet the minimum mouse click requirement (if defined)?
• Does the navigation flow logically for the user based on use case information?
• Does the screen meet any requirements for learnability?
• Is there help text available for the user?
• Is there hover text available for the user?
• Will the user consider this to be attractive (subjective assessment)?
• Is the use of colors consistent with other applications and with organization standards?
• Are the sound effects used appropriately and are they configurable?
• Does the screen meet localization requirements?
• Can the user determine what to do (understandability) (subjective assessment)?
• Will the user be able to remember what to do (learnability) (subjective assessment)?

In an Agile project, requirements usually take the form of user stories. These stories represent small units of demonstrable functionality. Whereas a use case is a user transaction that traverses multiple areas of functionality, a user story is more isolated and is generally scoped by the time it takes to develop it. A checklist for a user story may include:
• Is the story appropriate for the target iteration/sprint?
• Are the acceptance criteria defined and testable?
• Is the functionality clearly defined?
• Are there any dependencies between this story and others?
• Is the story prioritized?
• Does the story contain one item of functionality?

Of course, if the story defines a new interface, then using a generic story checklist (such as the one above) together with a detailed user interface checklist would be appropriate.

A checklist can be tailored based on the following:
• Organization (e.g., considering company policies, standards, conventions)
• Project/development effort (e.g., focus, technical standards, risks)
• Object being reviewed (e.g., code reviews might be tailored to specific programming languages)

Good checklists will find problems and will also help to start discussions regarding other items that might not have been specifically referenced in the checklist. Using a combination of checklists is a strong way to ensure a review achieves the highest quality work product. Using standard checklists such as those referenced in the Foundation Level syllabus, and developing organizationally specific checklists such as the ones shown above, will help the Test Analyst be effective in reviews. For more information on reviews and inspections see [Gilb93] and [Wiegers03].
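Checklist outcomes can also be captured as structured data so that findings are recorded consistently across reviews. The following Python sketch applies the user story checklist above; the class and field names are illustrative only:

from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str
    passed: bool | None = None   # None = not yet assessed
    finding: str = ""            # reviewer's comment when the answer is "no"

# Tailored from the generic user story checklist above; wording as in the text.
story_checklist = [
    ChecklistItem("Is the story appropriate for the target iteration/sprint?"),
    ChecklistItem("Are the acceptance criteria defined and testable?"),
    ChecklistItem("Is the functionality clearly defined?"),
    ChecklistItem("Are there any dependencies between this story and others?"),
    ChecklistItem("Is the story prioritized?"),
    ChecklistItem("Does the story contain one item of functionality?"),
]

# A reviewer records outcomes during the review session.
story_checklist[1].passed = False
story_checklist[1].finding = "Acceptance criteria missing for the error path"

open_findings = [item for item in story_checklist if item.passed is False]
for item in open_findings:
    print(f"FINDING: {item.question} -> {item.finding}")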
Learning Objectives for Defect Management

6.2 When Can a Defect be Detected?
TA-6.2.1 (K2) Explain how phase containment can reduce costs
6.1 Introduction
Test Analysts evaluate the behavior of the system in terms of business and user needs, e.g., would the user know what to do when faced with a particular message or behavior? By comparing the actual result with the expected result, the Test Analyst determines whether the system is behaving correctly. An anomaly (also called an incident) is an unexpected occurrence that requires further investigation. An anomaly may be a failure caused by a defect. An anomaly may or may not result in the generation of a defect report. A defect is an actual problem that should be resolved.
defect type. Data should be recorded in distinct fields, ideally supported by data validation in order to avoid data entry failures, and to ensure effective reporting. Defect reports are written for failures discovered during functional and non-functional testing. The information in a defect report should always be oriented toward clearly identifying the scenario in which the problem was detected, including steps and data required to reproduce that scenario, as well as the expected and actual results. Non-functional defect reports may require more details regarding the environment, other performance parameters (e.g., size of the load), sequence of steps and expected results. When documenting a usability failure, it is important to state what the user expected the software to do. For example, if the usability standard is that an operation should be completed in less than four mouse clicks, the defect report should state how many clicks were required versus the stated standard. In cases where a standard is not available and the requirements did not cover the non-functional quality aspects of the software, the tester may use the "reasonable person" test to determine that the usability is unacceptable. In that case, the expectations of that "reasonable person" must be clearly stated in the defect report. Because non-functional requirements are sometimes missing in the requirements documentation, documenting non-functional failures presents more challenges for the tester in documenting the "expected" versus the "actual" behavior. While the usual goal in writing a defect report is to obtain a fix for the problem, the defect information must also be supplied to support accurate classification, risk analysis, and process improvement.
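The following Python sketch illustrates a defect report with distinct, validated fields as recommended above. The field names and permitted values are invented for the example and do not represent any particular standard or tool:

from dataclasses import dataclass, field

@dataclass
class DefectReport:
    """Illustrative defect report; field names are examples, not a standard."""
    summary: str
    steps_to_reproduce: list[str]
    expected_result: str
    actual_result: str
    environment: str = ""       # more detail is often needed for non-functional defects
    severity: str = "medium"    # illustrative values: critical / high / medium / low
    priority: str = "normal"

    def __post_init__(self):
        # Simple data validation, as recommended above, to avoid data entry
        # errors and keep classification data consistent for reporting.
        if not self.steps_to_reproduce:
            raise ValueError("A defect report needs reproduction steps")
        if self.severity not in {"critical", "high", "medium", "low"}:
            raise ValueError(f"Unknown severity: {self.severity}")

report = DefectReport(
    summary="Login requires 7 clicks; usability guideline allows at most 4",
    steps_to_reproduce=["Open start page", "Navigate to login", "Complete login"],
    expected_result="Login achievable within 4 mouse clicks (per guideline)",
    actual_result="7 mouse clicks required",
    severity="low",
)
print(report.summary)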
In addition to these classification categories, defects are also frequently classified based on severity and priority. Depending on the project, it may also make sense to classify based on mission safety impact, project schedule impact, project costs, project risk and project quality impact. These classifications may be considered in agreements regarding how quickly a fix will be delivered.

The final area of classification is the resolution. Defects are often grouped together based on their resolution, e.g., fixed/verified, closed/not a problem, deferred, open/unresolved. This classification is usually used throughout a project as the defects are tracked through their lifecycle.

The classification values used by an organization are often customized. The above are only examples of some of the common values used in industry. It is important that the classification values be used consistently in order to be useful. Too many classification fields will make opening and processing a defect somewhat time consuming, so it is important to weigh the value of the data being gathered against the incremental cost for every defect processed. The ability to customize the classification values gathered by a tool is often an important factor in tool selection.
Learning Objectives for Test Tools

7.2 Test Tools and Automation
TA-7.2.1 (K2) Explain the benefits of using test data preparation tools, test design tools and test execution tools
TA-7.2.2 (K2) Explain the Test Analyst's role in keyword-driven automation
TA-7.2.3 (K2) Explain the steps for troubleshooting an automated test execution failure
7.1 Introduction
Test tools can greatly improve the efficiency and accuracy of the test effort, but only if the proper tools are implemented in the proper way. Test tools have to be managed as another aspect of a well-run test organization. The sophistication and applicability of test tools vary widely and the tool market is constantly changing. Tools are usually available from commercial tool vendors as well as from various freeware or shareware tool sites.
7.2.3.1 Applicability
The return on investment for test execution tools is usually highest when automating regression tests, because of the low level of maintenance expected and the repeated execution of the tests. Automating smoke tests can also be an effective use of automation due to the frequent use of the tests, the need for a quick result and, although the maintenance cost may be higher, the ability to have an automated way to evaluate a new build in a continuous integration environment. Test execution tools are commonly used during the system and integration testing levels. Some tools, particularly API testing tools, may be used at the component testing level as well. Leveraging the tools where they are most applicable will help to improve the return on investment.

7.2.3.2 Test Automation Tool Basics
Test execution tools work by executing a set of instructions written in a programming language, often called a scripting language. The instructions to the tool are at a very detailed level that specifies inputs, the order of the inputs, the specific values used for the inputs and the expected outputs. This can make the detailed scripts susceptible to changes in the software under test (SUT), particularly when the tool is interacting with the graphical user interface (GUI). Most test execution tools include a comparator, which provides the ability to compare an actual result to a stored expected result (a minimal sketch of this appears below).

7.2.3.3 Test Automation Implementation
The tendency in test execution automation (as in programming) is to move from detailed low-level instructions to more high-level languages, utilizing libraries, macros and sub-programs. Design techniques such as keyword-driven and action word-driven capture a series of instructions and reference those with a particular "keyword" or "action word". This allows the Test Analyst to write test cases in human language while ignoring the underlying programming language and lower level functions. Using this modular writing technique allows easier maintainability during changes to the functionality and interface of the software under test. [Bath08] The use of keywords in automated scripts is discussed further below.

Models can be used to guide the creation of the keywords or action words. By looking at the business process models which are often included in the requirements documents, the Test Analyst can determine the key business processes that must be tested. The steps for these processes can then be determined, including the decision points that may occur during the processes. The decision points can become action words that the test automation can obtain and use from the keyword or action word spreadsheets. Business process modeling is a method of documenting the business processes and can be used to identify these key processes and decision points. The modeling can be done manually or by using tools that act on inputs based on business rules and process descriptions.

7.2.3.4 Improving the Success of the Automation Effort
When determining which tests to automate, each candidate test case or candidate test suite must be assessed to see if it merits automation. Many unsuccessful automation projects are based on automating the readily available manual test cases without checking the actual benefit from the automation. It may be optimal for a given set of test cases (a suite) to contain manual, semi-automated and fully automated tests.
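As a minimal illustration of the tool basics from Section 7.2.3.2, this Python sketch shows test data rows that specify inputs and stored expected outputs, with a simple comparator checking actual against expected results. The function under test is a stand-in for a real SUT; all names are invented for the example:

# Minimal sketch of the execution-tool concepts from Section 7.2.3.2:
# each row specifies inputs and a stored expected output, and a comparator
# checks the actual result against the expected one.
def sut_add_to_basket(item: str, quantity: int) -> str:
    """Stand-in for the SUT; a real tool would drive the application."""
    return f"{quantity} x {item} added"

test_data = [
    # (inputs, expected output)
    (("apple", 2), "2 x apple added"),
    (("pear", 1), "1 x pear added"),
    (("apple", 0), "0 x apple added"),   # boundary case
]

def comparator(actual: str, expected: str) -> bool:
    """Compare an actual result to the stored expected result."""
    return actual == expected

for (item, qty), expected in test_data:
    actual = sut_add_to_basket(item, qty)
    verdict = "PASS" if comparator(actual, expected) else "FAIL"
    print(f"{verdict}: add_to_basket{(item, qty)} -> {actual!r} (expected {expected!r})")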
The following aspects should be considered when implementing a test execution automation project.

Possible benefits:
• Automated test execution time will become more predictable
• Regression testing and defect validation using automated tests will be faster and more reliable late in the project
• The status and technical growth of the tester or test team may be enhanced by the use of automated tools
• Automation can be particularly helpful with iterative and incremental development lifecycles to provide better regression testing for each build or iteration
• Coverage of certain test types may only be possible with automated tools (e.g., large data validation efforts)
• Test execution automation can be more cost-effective than manual testing for large data input, conversion and comparison testing efforts by providing fast and consistent input and verification

Possible risks:
• Incomplete, ineffective or incorrect manual testing may be automated as is
• The testware may be difficult to maintain, requiring multiple changes when the software under test is changed
• Direct tester involvement in test execution may be reduced, resulting in less defect detection
• The test team may have insufficient skills to use the automated tools effectively
• Irrelevant tests that do not contribute to the overall test coverage may be automated because they exist and are stable
• Tests may become unproductive as the software stabilizes (pesticide paradox)

During deployment of a test execution automation tool, it is not always wise to automate manual test cases as is; it is often better to redefine the test cases for better automation use. This includes formatting the test cases, considering re-use patterns, expanding input by using variables instead of hard-coded values, and utilizing the full benefits of the test tool. Test execution tools usually have the ability to traverse multiple tests, group tests, repeat tests and change the order of execution, all while providing analysis and reporting facilities.

For many test execution automation tools, programming skills are necessary to create efficient and effective tests (scripts) and test suites. Large automated test suites commonly become very difficult to update and manage if not designed with care. Appropriate training in test tools, programming and design techniques is valuable to make sure the full benefits of the tools are leveraged.

During test planning, it is important to allow time to periodically execute the automated test cases manually, in order to retain the knowledge of how the test works, to verify correct operation, and to review input data validity and coverage.

7.2.3.5 Keyword-Driven Automation
Keywords (sometimes referred to as action words) are mostly, but not exclusively, used to represent high-level business interactions with a system (e.g., "cancel order"). Each keyword is typically used to represent a number of detailed interactions between an actor and the system under test. Sequences of keywords (including relevant test data) are used to specify test cases. [Buwalda01]

In test automation, a keyword is implemented as one or more executable test scripts. Tools read test cases written as a sequence of keywords that call the appropriate test scripts which implement the keyword functionality. The scripts are implemented in a highly modular manner to enable easy mapping to specific keywords. Programming skills are needed to implement these modular scripts.

The primary advantages of keyword-driven test automation are:
• Keywords that relate to a particular application or business domain can be defined by domain experts. This can make the task of test case specification more efficient.
• A person with primarily domain expertise can benefit from automatic test case execution (once the keywords have been implemented as scripts) without having to understand the underlying automation code.
• Test cases written using keywords are easier to maintain because they are less likely to need modification if details in the software under test change.
• Test case specifications are independent of their implementation. The keywords can be implemented using a variety of scripting languages and tools.

The automation scripts (the actual automation code) that use the keyword/action word information are usually written by developers or Technical Test Analysts, while the Test Analysts usually create and maintain the keyword/action word data (a minimal sketch of this separation appears at the end of this section). While keyword-driven automation is usually run during the system testing phase, code development may start as early as the integration phases. In an iterative environment, the test automation development is a continuous process.

Once the input keywords and data are created, the Test Analysts usually assume responsibility for executing the keyword-driven test cases and analyzing any failures that may occur. When an anomaly is detected, the Test Analyst must investigate the cause of failure to determine if the problem is with the keywords, the input data, the automation script itself or with the application being tested. Usually the first step in troubleshooting is to execute the same test with the same data manually to see if the failure is in the application itself. If this does not show a failure, the Test Analyst should review the sequence of tests that led up to the failure to determine if the problem occurred in a previous step (perhaps by producing incorrect data) but did not surface until later in the processing. If the Test Analyst is unable to determine the cause of failure, the troubleshooting information should be turned over to the Technical Test Analyst or developer for further analysis.

7.2.3.6 Causes for Failures of the Automation Effort
Test execution automation projects often fail to achieve their goals. These failures may be due to insufficient flexibility in the usage of the testing tool, insufficient programming skills in the testing team or an unrealistic expectation of the problems that can be solved with test execution automation. It is important to note that test execution automation takes management, effort, skills and attention, just as any software development project does. Time has to be devoted to creating a sustainable architecture, following proper design practices, providing configuration management and following good coding practices. The automated test scripts have to be tested because they are likely to contain defects. The scripts may need to be tuned for performance. Tool usability must be considered, not just for the developer but also for the people who will be using the tool to execute scripts. It may be necessary to design an interface between the tool and the user that provides access to the test cases in a way that is organized logically for the tester while still providing the accessibility needed by the tool.
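To make the separation between keywords, test data and implementing scripts concrete, the following Python sketch illustrates the keyword-driven approach described in Section 7.2.3.5. The keywords, scripts and application state are all invented for the example; in a real project the scripts would drive the actual application under test:

# Minimal keyword-driven sketch: a Test Analyst maintains the keyword
# table (test cases as keywords plus data), while the executable scripts
# behind each keyword are implemented separately.
app_state = {"logged_in": False, "orders": []}

def login(user: str) -> None:
    app_state["logged_in"] = True
    print(f"logged in as {user}")

def place_order(item: str) -> None:
    assert app_state["logged_in"], "must be logged in first"
    app_state["orders"].append(item)
    print(f"order placed: {item}")

def cancel_order(item: str) -> None:
    app_state["orders"].remove(item)
    print(f"order cancelled: {item}")

# Mapping from business-level keywords to their implementing scripts.
KEYWORDS = {"login": login, "place order": place_order, "cancel order": cancel_order}

# A test case written as a sequence of keywords with test data.
test_case = [
    ("login", "test.analyst@example.com"),
    ("place order", "book"),
    ("cancel order", "book"),
]

for keyword, argument in test_case:
    KEYWORDS[keyword](argument)   # the tool resolves keyword -> script

assert app_state["orders"] == [], "expected no open orders after cancellation"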
8. References
8.1 Standards
[ISO25000] ISO/IEC 25000:2005, Software Engineering - Software Product Quality Requirements and Evaluation (SQuaRE), Chapters 1 and 4
[ISO9126] ISO/IEC 9126-1:2001, Software Engineering - Software Product Quality, Chapters 1 and 4
[RTCA DO-178B/ED-12B] Software Considerations in Airborne Systems and Equipment Certification, RTCA/EUROCAE, 1992, Chapter 1
8.3 Books
[Bath08] Graham Bath, Judy McKay, The Software Test Engineer's Handbook, Rocky Nook, 2008, ISBN 978-1-933952-24-6
[Beizer95] Boris Beizer, Black-box Testing, John Wiley & Sons, 1995, ISBN 0-471-12094-4
[Black02] Rex Black, Managing the Testing Process (2nd edition), John Wiley & Sons: New York, 2002, ISBN 0-471-22398-0
[Black07] Rex Black, Pragmatic Software Testing, John Wiley & Sons, 2007, ISBN 978-0-470-12790-2
[Buwalda01] Hans Buwalda, Integrated Test Design and Automation, Addison-Wesley Longman, 2001, ISBN 0-201-73725-6
[Cohn04] Mike Cohn, User Stories Applied: For Agile Software Development, Addison-Wesley Professional, 2004, ISBN 0-321-20568-5
[Copeland03] Lee Copeland, A Practitioner's Guide to Software Test Design, Artech House, 2003, ISBN 1-58053-791-X
[Craig02] Rick David Craig, Stefan P. Jaskiel, Systematic Software Testing, Artech House, 2002, ISBN 1-580-53508-9
[Gerrard02] Paul Gerrard, Neil Thompson, Risk-based E-business Testing, Artech House, 2002, ISBN 1-580-53314-0
[Gilb93] Tom Gilb, Dorothy Graham, Software Inspection, Addison-Wesley, 1993, ISBN 0-201-63181-4
[Graham07] Dorothy Graham, Erik van Veenendaal, Isabel Evans, Rex Black, Foundations of Software Testing, Thomson Learning, 2007, ISBN 978-1-84480-355-2
[Grochmann94] M. Grochmann, Test Case Design Using Classification Trees, in: conference proceedings STAR 1994
[Koomen06] Tim Koomen, Leo van der Aalst, Bart Broekman, Michiel Vroon, TMap NEXT, for Result Driven Testing, UTN Publishers, 2006, ISBN 90-72194-80-2
[Myers79] Glenford J. Myers, The Art of Software Testing, John Wiley & Sons, 1979, ISBN 0-471-46912-2
[Splaine01] Steven Splaine, Stefan P. Jaskiel, The Web-Testing Handbook, STQE Publishing, 2001, ISBN 0-970-43630-0
[vanVeenendaal12] Erik van Veenendaal, Practical Risk-Based Testing: The PRISMA Approach, UTN Publishers, The Netherlands, ISBN 978-94-90986-07-0
[Wiegers03] Karl Wiegers, Software Requirements 2, Microsoft Press, 2003, ISBN 0-735-61879-8
[Whittaker03] James Whittaker, How to Break Software, Addison-Wesley, 2003, ISBN 0-201-79619-8
[Whittaker09] James Whittaker, Exploratory Software Testing, Addison-Wesley, 2009, ISBN 0-321-63641-4
9. Index
0-switch, 31
accessibility, 41
accessibility testing, 47
accuracy, 41
accuracy testing, 43
action word-driven, 58
action words, 59
activities, 10
Agile, 10, 15, 22, 33, 34, 43, 51, 61
anonymize, 57
applying the best technique, 39
attractiveness, 41
automation benefits, 58
automation risks, 59
boundary value analysis, 26, 28
breadth-first, 24
business process modeling, 58
BVA, 26
cause-effect graphing, 26, 30
centralized testing, 22
checklist-based testing, 26, 38
checklists in reviews, 49
classification tree, 26, 31
combinatorial testing, 26, 32, 44
combinatorial testing techniques, 31
combining techniques, 35
compliance, 42
concrete test cases, 8, 13
decision table, 26, 29
defect
  detection, 53
  fields, 53
defect classification, 54
defect taxonomy, 26, 35, 36, 52
defect tracking, 21
defect-based, 35
defect-based technique, 26, 35
depth-first, 24
distributed testing, 22
distributed, outsourced & insourced testing, 22
domain analysis, 26, 34
embedded iterative, 10
equivalence partitioning, 26, 27
error guessing, 26, 37
evaluating exit criteria and reporting, 18
evaluation, 46
exit criteria, 8
experience-based technique, 26
experience-based techniques, 16, 26, 37, 39
exploratory testing, 26, 38
false-negative result, 17
false-positive result, 17
functional quality characteristics, 43
functional testing, 43
heuristic, 41, 46
high-level test case, 8
incident, 17
insourced testing, 22
inspection, 46
interoperability, 41
interoperability testing, 44
ISO 25000, 15, 42
ISO 9126, 15, 42
keyword-driven, 56, 58
keyword-driven automation, 59
learnability, 41
logical test case, 8
logical test cases, 13
low-level test case, 8
metrics, 12
N-1 switches, 31
non-functional quality characteristics, 43
n-switch coverage, 31
operability, 41
orthogonal array, 26, 32
orthogonal array testing, 26, 32
outsourced testing, 22
pairwise, 32
pairwise testing, 26
pesticide paradox, 17
phase containment, 52, 53
product risk, 20
product risks, 12
prototypes, 46
quality characteristics, 42
quality sub-characteristics, 42
questionnaires, 46
regression test set, 19
requirements-based testing, 26
requirements-oriented checklists, 50
retrospective meetings, 19
review, 46, 49
risk analysis, 20
risk assessment, 23
risk identification, 20, 23
risk level, 20
risk management, 20
risk mitigation, 20, 21, 24
risk-based testing, 20, 22
risk-based testing strategy, 15
root cause, 12, 54, 55
root cause analysis, 52, 55
SDLC
  Agile methods, 10
  iterative, 10
software lifecycle, 9
specification-based technique, 26
specification-based techniques, 27
standards
  DO-178B, 16
  ED-12B, 16
  UML, 46
state transition testing, 26, 30
suitability, 41
suitability testing, 44
SUMI, 41, 46
surveys, 46
test analysis, 12
test basis, 14
test case, 14
test charter, 26
test closure activities, 19
test condition, 12
test control, 8
test design, 8, 13
test environment, 16
test estimates, 11
test execution, 8, 16
test implementation, 8, 15
test logging, 17
test monitoring, 20
test monitoring and control, 11
test oracle, 14
test planning, 8, 11
test plans, 11
test progress monitoring & control, 21
test strategy, 12, 14, 20
test suites, 15
test techniques, 26
testing software quality characteristics, 41
tools, 57
  test data preparation tool, 56, 57
  test design tool, 29, 56, 57
  test execution tool, 56, 57
traceability, 12
understandability, 41
unscripted testing, 16
untestable, 50
usability, 41
usability test specification, 46
usability testing, 44
use case testing, 26, 33
user stories, 14, 15, 30, 33, 50, 51
user story testing, 26, 33
validation, 46
WAMMI, 41, 46