
Project xxxx TEST STRATEGY

DOCUMENT NAME & LOCATION:
DOCUMENT VERSION:
DATE:
READERSHIP:
SUMMARY:

Amendment History

Version | Date | Comment | By | Approved
V0.1    |      |         |    |

Associated Documents (this document should be read in conjunction with):

Title of Document | Version No/File Name | Date

Approval

Approver | Name | Date
Project Manager | |


CONTENTS

1. Introduction
   1.1 Context
   1.2 Purpose
   1.3 Scope to be Tested
   1.4 Out of Scope (Not Tested)
2. Testing Approach
   2.1 Purpose
   2.2 Test Objectives
   2.3 Traditional Testing Approach
   2.4 Overview of Test Phases
      2.4.1 Component (unit) Testing
      2.4.2 System Functional Testing
      2.4.3 End to End (E2E) Testing
      2.4.4 Technical (non-functional) Testing
      2.4.5 User Acceptance Testing (UAT)
      2.4.6 Operational Acceptance Testing (OAT)
      2.4.7 Regression Testing
   2.5 Proposed Test Approach
      2.5.1 Release Schedule
      2.5.2 Testing Schedule
   2.6 Risk Approach
3. Test Deliverables
   3.1 Testing Deliverables
   3.2 Detailed Test Plans
   3.3 Test Scripts
   3.4 Test Progress Reporting
4. Test Management
   4.1 Resource Management
   Assumptions and Dependencies
5. Defect Management
   5.1 Defect Management Approach
   5.2 Defect Status and Process
   5.3 Defect Severity
   5.4 Defect Priority
   5.5 Test Progress Reporting Metrics
6. Test Tools
   6.1 Introduction
   6.2 Overview of Testing Tool
   6.3 Test Tool Requirement and Description
APPENDIX A Example Testing Risk Log

APPENDIX B Example Detailed Test Phase Description
APPENDIX C Test Plan Contents
APPENDIX D Sample Testing Roles and Responsibilities


1. INTRODUCTION

1.1 Context

Describe the project context here.

1.2 Purpose

This document sets out the strategy for all testing within the scope of the project. It describes:
- the test approach
- the test phases
- the principles governing testing activities

The delivery of the solution and the overall business strategy are excluded from the scope of this document.

1.3 Scope to be Tested

The following key components (sub-systems) will be tested:
- All aspects of the non-functional requirements

1.4 Out of Scope (Not Tested)

The following features and attributes will NOT be tested:

2. TESTING APPROACH

2.1 Purpose

This subsection describes the testing approach that will be adopted by the project.

2.2 Test Objectives

The test objectives are:


- To demonstrate that the solution meets all requirements
- To identify Defects (faults and failures to meet the actual requirements) with an agreed rectification plan
- To mitigate risk and demonstrate that the release is fit for purpose and meets user expectations.

2.3 Traditional Testing Approach

The traditional approach to testing uses the "V" model, which maps the types of test to each stage of development, as per the simplified diagram below:

User Requirements -> User Acceptance Testing
Functional Specification -> End to End Testing
System Design -> System Functional Testing
Component Design -> Component Testing
Component Build (base of the V)

It shows that for each requirement, specification or design document there is an associated testing phase (e.g. Component Design is associated with Component Testing). Where possible, testing should be carried out according to the V-Model approach, using the Requirements Traceability Matrix as a key input to test design and planning.
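As an illustration only, the following is a minimal sketch (in Python, with hypothetical requirement and test case records) of how a Requirements Traceability Matrix check could confirm that every requirement is covered by at least one test in the phase the V-Model maps it to:

```python
# Illustrative sketch only: checking V-model coverage via a simple
# Requirements Traceability Matrix. All names and data are hypothetical.

V_MODEL = {
    "User Requirements": "User Acceptance Testing",
    "Functional Specification": "End to End Testing",
    "System Design": "System Functional Testing",
    "Component Design": "Component Testing",
}

# Each requirement records the specification level it comes from;
# each test case records the requirement(s) it covers and its test phase.
requirements = [
    {"id": "REQ-001", "level": "Functional Specification"},
    {"id": "REQ-002", "level": "System Design"},
]
test_cases = [
    {"id": "TC-010", "covers": ["REQ-001"], "phase": "End to End Testing"},
]

def uncovered(requirements, test_cases):
    """Return requirements with no test case in the phase the V-model expects."""
    gaps = []
    for req in requirements:
        expected_phase = V_MODEL[req["level"]]
        covered = any(
            req["id"] in tc["covers"] and tc["phase"] == expected_phase
            for tc in test_cases
        )
        if not covered:
            gaps.append((req["id"], expected_phase))
    return gaps

print(uncovered(requirements, test_cases))  # [('REQ-002', 'System Functional Testing')]
```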

2.4 Overview of Test Phases

List here the key phases of testing, e.g.:
- Component (unit) Tests
- System Functional Tests
- End to End Process (E2E) Tests
- Technical (Non-Functional) Tests
- User Acceptance Tests
- Operational Acceptance Tests


Each Test Phase outlined below should be described, including the following details:
- Owner
- Objective of the phase
- Test Approach: execution, environments, data, resources and location
- Scope
- Exclusions
- Entry and Exit criteria
- Sign-off procedures
- Testing tools to be used

2.4.1 Component (unit) Testing

This is the testing that is carried out in the early stages of the development lifecycle. Describe here the key components and the Owners (e.g. THE CLIENT team, Vendor, etc.) that are responsible for testing each component.

2.4.2 System Functional Testing

System Functional Testing is the testing of the core functional areas of the system against the agreed requirements and technical documents. All the System Functional Tests to be carried out should be documented in the Detailed System Functional Test Plan, to be produced before testing begins.

2.4.3 End to End (E2E) Testing

Once all the functional areas have been successfully tested, the next phase of testing will be the End to End process testing. End to End (E2E) testing covers the testing of the full end-to-end processes as defined in the process model. The key difference between End to End Testing and System Functional Testing is that E2E Testing primarily validates the processes across the appropriate functions, rather than just the discrete functions. All the E2E processes to be tested will be documented in the E2E Detailed Test Plan.

2.4.4 Technical (non-functional) Testing

Technical (non-functional) testing will primarily cover the Performance, Volume and Scalability of the solution. The testing will be based on the requirements, technical and process documents. Non-functional requirements should have been gathered in the Requirements Traceability Matrix. A range of test volume scenarios will be specified in the Non-Functional Testing Detailed Test Plan. The scenarios will be comparable with the expected operational volumes. A set of exceptional volume tests will also be specified to demonstrate the robustness of the solution under exceptional volume conditions.


A subset of these tests will also be executed (i.e. re-run) as part of the Operational Acceptance Testing (OAT).

2.4.5 User Acceptance Testing (UAT)

User Acceptance Testing (UAT) is the testing that is conducted by the End User Representatives to ensure that the delivered system meets the user-defined functional requirements. It is expected that the User Representatives will select a subset of tests from the System Functional and E2E test scripts. These tests will be documented in the UAT Detailed Test Plan by the Test Analysts in advance of the execution of the UAT. During the execution of UAT, the User Representatives will also be given the opportunity to carry out undocumented tests. Once the UAT tests are successfully completed, UAT can be signed off by the business team (including the SPA).

2.4.6 Operational Acceptance Testing (OAT)

Operational Acceptance Testing is the last major test phase and is executed on the final implemented solution to confirm that it can be supported and meets the operational support requirements agreed in the Support Model. Once these tests are passed, the solution can be promoted to operational status. If there are any unresolved priority 1 or priority 2 defects, the Application Manager may reserve the right not to accept the system into operational support.

2.4.7 Regression Testing

Regression testing becomes necessary when:
- A new release or bug fix is delivered following the resolution of a Defect;
- Enhancements to the functionality are incorporated in the system; or
- The technical environment is altered.

Regression Testing is performed by re-running a selected set of the test scripts, chosen according to the nature of the change. All test scripts will be designed to be re-run as necessary. (Please note that regression testing tends to be carried out as part of the above phases and is not a separate testing phase in its own right.)
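As an illustration only, a minimal sketch (assuming hypothetical script identifiers and tags) of how a regression subset might be selected from the script library based on the functional areas touched by a change:

```python
# Illustrative sketch only: selecting a regression subset by impacted area.
# Script identifiers, tags and the change description are hypothetical.

test_scripts = [
    {"id": "SFT-001", "areas": {"billing"}, "core_regression": True},
    {"id": "SFT-014", "areas": {"reporting"}, "core_regression": False},
    {"id": "E2E-003", "areas": {"billing", "payments"}, "core_regression": True},
]

def select_regression(scripts, changed_areas):
    """Re-run core regression scripts plus anything touching the changed areas."""
    return [
        s["id"]
        for s in scripts
        if s["core_regression"] or s["areas"] & changed_areas
    ]

print(select_regression(test_scripts, changed_areas={"payments"}))
# ['SFT-001', 'E2E-003']
```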

2.5 Proposed Test Approach

Outline here the likely sequence of testing:

2.5.1 Release Schedule

The following table outlines the delivery schedule of different code releases:

2.5.2 Testing Schedule

Outline below the proposed high-level schedule for testing:


(A detailed test plan should be produced early in the Execute phase.)

2.6 Risk Approach

It is often impractical to perform a fully exhaustive set of tests for a solution, since this would be very costly in terms of both money and time, and because the vendors should have tested their products prior to release to THE CLIENT. The objective is to optimise the testing resources and reduce test time without compromising the quality of the final solution.

Therefore, all major test activities will carry risks, and an impact and likelihood analysis should be carried out to validate the choices being made.

List all key Testing risks below:


3. TEST DELIVERABLES

3.1 Testing Deliverables

This section details the type and structure of the test documentation that needs to be produced. The following is a list of documents that will be delivered as part of the testing activities:
- Detailed Test Plans
- Test Scripts for all test phases
- Testing Progress reporting

The following sub-sections provide an overview of each of the key deliverables.

3.2 Detailed Test Plans

The core of each Detailed Test Plan is based on the requirements, design documentation and other non-functional criteria. Each Detailed Test Plan will document the test method to be adopted for its test phase. Detailed Test Plans should cover:
- System Functional Testing
- Technical (non-functional) Testing
- End to End Process Tests
- User Acceptance Testing
- Operational Acceptance Testing

Within the Detailed Test Plan, a full description of the following should be provided:
- the test environment
- all required test scripts
- test data
- interfaces (integration) required.

Once the Detailed Test Plans have been approved, the test scripts can be documented.

3.3 Test Scripts

A test script describes in detail how the test is conducted and what results are expected. A single test script may cover one or more requirements. However, typically a single requirement is broken down into sub-requirements/test conditions. This allows the Testers to show exactly how requirements have been covered by the test scripts and enables the Testing team to track issues related to specific test scripts.

Each test script will detail the following:
- Test Name - A unique reference number followed by the test name identifying the test
- Requirement cross reference - A reference to the requirement(s) and source documentation
- Revision History - Original, review and update details related to specific changes to the test
- Prerequisites - A reference to any scripts that need to be run before this script can be executed
- Test Description - A summary description of the purpose of the test
- Test Data - The test data to be used
- Test Steps - The instructions for running the test, e.g. the actions that need to be performed in order to exercise the piece of functionality being tested
- Expected Results - A definition of the test results expected to be observed if the test is successful. Enough information should be supplied to enable the tester to determine unambiguously whether or not the test has been passed
- Actual Results - The actual results that were observed and a reference to any test evidence. As a rule the tester will store evidence of the test results where possible. This will include a record of the build being tested, whether the test passed or failed and a list of any test observations raised
- Pass / Fail - A record of whether the test was passed or failed.
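As an illustration only, a minimal sketch (in Python, with hypothetical field values) of how a test script record carrying these fields might be represented so that coverage and results can be tracked consistently:

```python
# Illustrative sketch only: a structured representation of a test script
# record carrying the fields listed above. All values are hypothetical.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestScript:
    test_name: str                      # unique reference number plus test name
    requirement_refs: List[str]         # requirement cross references
    revision_history: List[str]         # original, review and update details
    prerequisites: List[str]            # scripts that must run first
    description: str                    # purpose of the test
    test_data: str                      # data to be used
    test_steps: List[str]               # instructions for running the test
    expected_results: str               # unambiguous pass definition
    actual_results: Optional[str] = None
    passed: Optional[bool] = None       # Pass / Fail once executed

script = TestScript(
    test_name="SFT-042 Validate order submission",
    requirement_refs=["REQ-017"],
    revision_history=["v0.1 drafted"],
    prerequisites=["SFT-001"],
    description="Confirm an order can be submitted end to end",
    test_data="Standard customer account (hypothetical)",
    test_steps=["Log in", "Create order", "Submit order"],
    expected_results="Order confirmation reference is displayed",
)
```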


3.4 Test Progress Reporting

Progress reports will be produced at regular intervals (typically weekly). The report will show:
- Test Phase
- System Under Test
- Test environment
- No of total tests
- No of tests completed
- No of tests passed
- No of tests failed
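As an illustration only, a minimal sketch (with hypothetical execution records) of how these weekly figures could be tallied from logged test results:

```python
# Illustrative sketch only: deriving the weekly progress figures from
# logged test execution records. All data below is hypothetical.
from collections import Counter

executions = [
    {"phase": "System Functional", "status": "passed"},
    {"phase": "System Functional", "status": "failed"},
    {"phase": "System Functional", "status": "not run"},
]

def progress_summary(executions, phase, system, environment):
    statuses = Counter(e["status"] for e in executions if e["phase"] == phase)
    completed = statuses["passed"] + statuses["failed"]
    return {
        "Test Phase": phase,
        "System Under Test": system,
        "Test environment": environment,
        "Total tests": len([e for e in executions if e["phase"] == phase]),
        "Tests completed": completed,
        "Tests passed": statuses["passed"],
        "Tests failed": statuses["failed"],
    }

print(progress_summary(executions, "System Functional", "Project xxxx", "SYS-TEST"))
```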

Where appropriate, a detailed report highlighting all outstanding risks and potential business and/or operational impacts will also be produced.

4. TEST MANAGEMENT

4.1 Resource Management

The following is a list of all the key testing roles and core responsibilities that are required during the testing phase:
- Test Manager - responsible for all project testing
- End to End (E2E) Test Manager - responsible for the E2E test activities
- Test Phase Team Lead - responsible for input into the test phases
- Test Analyst - responsible for documenting and executing the tests
- Technical Test Analyst - responsible for technical tests

Depending on the scale and nature of the system (e.g. one provided by an external vendor), it may be possible to combine roles, so that a combination of a Test Manager and Test Analysts can fulfil all the testing responsibilities.

List the key resources here:

Role | Organisation / team | Name


Assumptions and Dependencies

Assumptions

List here any assumptions, e.g.:

- The vendors are responsible for fully testing their software before it is released to THE CLIENT.
- Vendors are available to review any test results and defects that the team feel may be associated with the product software.
- It is expected that all users are on IE 7+.
- The project Business Analysts are available to input into the creation of the test cases.
- The test documentation will be created by the test analysts.

Dependencies

List any key dependencies, e.g.:
- Build and component testing delivered on time and to a reasonable quality (i.e. all entry criteria met and the system is stable during the first week of test execution).
- Provisioning of the appropriate environments for each phase of testing.
- Utilisation and support of instances of the Test tool.
- Service Level Agreements in place for performance testing.
- Service Level Agreements in place for the testing environments.


5. DEFECT MANAGEMENT

5.1 Defect Management Approach

Defect management requires the Testing team to document and track (i.e. with an audit trail) all defects that have been raised, resolved and that remain open. This provides transparency across the project and to management on defect status and priorities.

5.2 Defect Status and Process

The following table shows the statuses of a defect:

- Identified: A new incident is identified.
- Assigned: An owner has been agreed and a fix is being created.
- Fixed: Development (i.e. the Vendor) has a fix for the defect.
- Released for Retest: The fix has been released (i.e. a code drop by the vendor) for the test team to re-test.
- Closed: The fix has been successfully tested or it is agreed no action is required.

All logged Defects should contain the following information:
- A unique identifier (defect number)
- Title for the defect
- Test Phase and test number that identified the defect
- System Area - the functional area this defect impacts (best estimate)
- The severity classification of the defect
- Estimated Fix Time - an estimated timescale for resolution (determining the impact on testing)
- A full description of the Defect and how to recreate the defect
- An indicator of the status of the Defect
- Level of risk on Go-Live
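As an illustration only, a minimal sketch (in Python, with hypothetical identifiers and values) of a defect record carrying these fields, with its status constrained to the lifecycle from section 5.2:

```python
# Illustrative sketch only: a defect record with the fields listed above and
# the statuses from section 5.2. All identifiers and values are hypothetical.
from dataclasses import dataclass
from enum import Enum

class DefectStatus(Enum):
    IDENTIFIED = "Identified"
    ASSIGNED = "Assigned"
    FIXED = "Fixed"
    RELEASED_FOR_RETEST = "Released for Retest"
    CLOSED = "Closed"

@dataclass
class Defect:
    defect_number: str          # unique identifier
    title: str
    test_phase: str             # phase that identified the defect
    test_number: str            # test number that identified the defect
    system_area: str            # functional area impacted (best estimate)
    severity: int               # 1 = Critical ... 4 = Low (see section 5.3)
    estimated_fix_time: str
    description: str            # full description and steps to recreate
    status: DefectStatus
    go_live_risk: str           # level of risk on Go-Live

defect = Defect(
    defect_number="DEF-0101",
    title="Order total not recalculated after discount",
    test_phase="System Functional Testing",
    test_number="SFT-042",
    system_area="Billing",
    severity=2,
    estimated_fix_time="3 days",
    description="Apply a discount to an order; the total remains unchanged.",
    status=DefectStatus.IDENTIFIED,
    go_live_risk="Medium",
)
```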

Wherever possible, the description of the Defect, or at least its impact, will be written in non-technical terms.

Defects will be logged in the following situations:
- When the actual result does not match the expected result and the expected result is correct
- When an expected result does not match an actual result but the actual result is found to be correct. In this case the action will be to correct the expected result and the Defect log will provide an audit trail
- When there is an unexpected outcome to a test that is not covered by the expected result. This may result in the creation of a new entry in the requirements catalogue
- When a Defect is raised to which no immediate acceptable response is available.


Once the project enters the System Test execution phase, typically each morning during test execution, the Testing Team will review all Defects raised since the previous meeting to determine any conflicts or impacts across the various phases of test. After each review session, the status of the defect will be updated and any re-testing of the defect fix and regression testing will be carried out under the guidance of the Test Manager.


The following flow chart provides an overview of the Defect management process:

Raise Defect -> Assign Defect -> Defect Fixed -> Fix applied and re-tested -> (pass) -> Defect Closed

If the re-test fails, the defect is assigned again and the cycle repeats until the fix passes.


5.3 Defect Severity

The table below describes the levels of defect severity:

- 1 - Critical: The entire system or a key business process is unusable or does not meet the needs of the business, many users are affected and no workaround is available; or corruption or loss of data occurs that is not immediately recoverable and prevents the business from continuing.
- 2 - High: Part of the system or a key business process is unusable or does not meet the needs of the business, few users are affected but a workaround is available; or corruption or loss of data occurs that is immediately recoverable and allows the business to continue.
- 3 - Medium: A non-critical Defect occurs, typically affecting a single user. The Defect affects the ability to provide the best service, but there is a workaround.
- 4 - Low: Cosmetic errors, documentation anomalies, requests for information or advice required.

5.4 Defect Priority

This table describes the levels of defect priority:

- 1 - Emergency: Incident that prevents all testing from continuing. All testing is suspended. Target resolution: within 4 hours.
- 2 - High: Incident that severely impacts testing but testing is able to continue, possibly with a workaround. Testing of particular function(s) is possibly suspended. Target resolution: within 24 hours.
- 3 - Medium: Incident that inconveniences testing progress. Testing is able to continue without much impact. Testing of a single function is possibly suspended. A test script or procedure error that requires a fix. Target resolution: within 3 days.
- 4 - Low: Incident that has little or no impact on testing progress. Target resolution: as agreed.

If a defect cannot be resolved in the specified period, the level of risk on Go-Live will be assessed.

5.5 Test Progress Reporting Metrics

The Key Performance Indicators that will be used to measure the success of testing are:

Test Execution:
- Number of Planned Test Cases (total)
- Number of Planned Test Cases (cumulative)
- Number of Passed Test Cases (cumulative)
- Number of Failed Test Cases (cumulative)
- Number of Test Cases in Progress (cumulative)

Defects:
- Total defects raised (and by priority)
- Total defects fixed (and by priority)
- Total defects in progress (and by priority)
- Total defects closed (and by priority)
- Total defects by functional area
- Defect severity by root cause
- Defect severity by application
- Defect severity by defect type
- Defect state by application
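As an illustration only, a minimal sketch (with hypothetical defect records) of how a few of these defect metrics could be tallied for reporting:

```python
# Illustrative sketch only: tallying defect metrics by priority and
# functional area. The defect records below are hypothetical.
from collections import Counter

defects = [
    {"priority": "2 - High", "area": "Billing", "status": "Assigned"},
    {"priority": "3 - Medium", "area": "Reporting", "status": "Closed"},
    {"priority": "2 - High", "area": "Billing", "status": "Closed"},
]

total_by_priority = Counter(d["priority"] for d in defects)
total_by_area = Counter(d["area"] for d in defects)
closed_by_priority = Counter(
    d["priority"] for d in defects if d["status"] == "Closed"
)

print("Raised by priority:", dict(total_by_priority))
print("By functional area:", dict(total_by_area))
print("Closed by priority:", dict(closed_by_priority))
```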


6. TEST TOOLS

6.1 Introduction

This section describes the types of tools that are required to manage the testing activities contained within this document.

6.2 Overview of Testing Tool

Describe here which tool is going to be used, and how it allows the user to organise and manage the testing activities. Typically the tool:
- allows the user to catalogue the requirements
- specifies the tests to be executed to validate each requirement
- allows the logging of the test results
- provides a reporting function that produces management reports and metrics on testing progress.

6.3 Test Tool Requirement and Description

The following table shows the test tool(s) that will be used to support the testing activities:


APPENDIX A EXAMPLE TESTING RISK LOG

The risk log records, for each risk: Ref, Risk, Probability, Impact, Owner and Mitigation.

1. Test environment availability for all required testing (Prob: H, Impact: H). Owner: Test Manager. Mitigation: Ensure that Test Environments are documented and provisioned well in advance of the Test execution phase for each of the projects in scope.

2. Resource constraints for test preparation and execution. Owner: Project Manager. Mitigation: Management to plan resource requirements for both Test preparation and Test execution phases, with sufficient time to secure additional resource where required.

3. Late changes in scope. Owner: Project Manager. Mitigation: Advance notice of changes impacting any in-scope project can feed into any required reprioritisation. Contingencies to be considered for potential delays.

4. Inter-dependencies between project streams could hinder progress on a single deliverable required for test preparation or execution. Owner: Project Manager. Mitigation: Where applicable, a Test harness will be created and managed by each distinct project, but the harness should closely represent the source or target system.

5. External interdependencies with vendors: late delivery could severely hinder progress. Owner: Project Manager. Mitigation: Ensure that a close relationship is maintained with external dependency partners and make provision for delays when encountered.

6. Test documentation (Detailed Test Plans) not approved prior to the scheduled Test start date. Owner: Test Manager. Mitigation: The Testing Team to ensure that all Test documentation is approved prior to commencement, as this is a key part of the Entry Criteria for each Test phase.

7. Infrastructure components tested in isolation may not fully prove the validity of the solution adopted. Owner: Test Manager. Mitigation: The System Integration Test releases will clarify this point, but the sooner the solution components are tested together the better.


APPENDIX B EXAMPLE DETAILED TEST PHASE DESCRIPTION

System Functional Testing

Accountability: Test Manager

Responsibility: Test Manager

Objectives: The objective of System Testing is to verify that the whole system performs as described in the functional and technical specification documents.

Approach: Location: the System Testing will be conducted

Scope: The Testing Team, in conjunction with the users and Project team members, define the scope of System Functional Testing. The following test types are in scope for this phase of the testing:
- Functional Testing
- Usability (User Interface)
- Security Testing
- Error handling
- Regression (if applicable)
- User performance (response time) Testing

Exclusions: The following exclusions will apply:
- Some interfaces may not be available to test against
- Penetration testing

Entry Criteria: The following entry criteria must be met before the commencement of System Testing:
- 100% of agreed functionality has been delivered (subject to the functionality contained in the release being tested)
- Vendors have fully tested their developments and are formally delivering the software to THE CLIENT (this will include the installation of the software)
- System Functional Test Plan has been reviewed and signed off by the agreed reviewers and approvers. This will primarily be the Project Team members
- System Functional Test Scripts completed and approved
- All components of the solution correctly configured in the System Test environment by the vendors
- Any test data either pre-loaded or available to load as required
- Version, Release, Configuration and Defect Management tools and processes defined and implemented
- System Configuration documented and approved.

Entry Criteria will be assessed prior to test execution. Variances will be noted and documented by the Test Manager and System Test Team Lead in a report, along with a risk assessment and recommendation to proceed. Where entry criteria have not been met, the decision to proceed with test execution is at the discretion of the IT&S Project Manager.

Exit Criteria: The System Functional Testing is completed when:
- 100% of pre-agreed system test cases have been executed
- All high priority test cases have passed successfully
- All defects found are recorded
- All severity 1 & 2 defects are resolved, retested and closed
- All severity 3 & 4 defects outstanding have documented workarounds and an agreed (between business, development, testing teams and vendors) schedule of when they will be corrected
- A set of pre-defined regression tests has been re-run after fixes were applied, and no new errors and/or Defects were identified during regression tests
- Component-tested code, executables and software configuration are under version control
- Test Summary Report is completed and signed off.

Sign-off: Completion is achieved when the exit criteria have been met. E-mail sign-off of the System Test, Test Summary Report (TSR) is performed by the Approvers outlined in the Testing Strategy and System Test Plan. The Test Manager is responsible for monitoring progress of the System Testing and ensuring all tests and results are documented.

Tools: Test cases will be entered into Test Director or Excel and then executed. Actual results will be compared to expected results. Test results (passed or failed) are logged in Test Director or Excel along with any defects found. Status reports will be prepared from Test Director or Excel.


APPENDIX C TEST PLAN CONTENTS

The Test Plan will have the following contents:

Test Plan Identifier
- Unique identifier associated with this Test Plan document

Approvals
- Names and titles of all persons who must approve this Test Plan

Introduction
- Objective of this Test Plan
- Background (summary of the project)
- Scope of the Testing phase this Plan relates to
- References to source material, i.e. Project Plan, Configuration Management Plan, Policies & Standards

Test Items
- Identification of the Test Items, including version/revision levels
- References to all source documentation, e.g. Requirements Specification, Design Specification, Functional/Technical Specification, Technical Solution docs, User Guide, Operations Guide

Features to be tested
- Identification of all hardware features, and all software features and combinations of software features, to be tested (descriptions of functionality and activities to be tested)

Features not to be tested
- Identification of all features and combinations of features that will not be tested, and the reasons

Approach
- Description of the overall approach to Testing. For each major group of features or feature combinations, specification of the approach that will ensure that these feature groups are adequately tested
- Specification of the major activities, techniques and tools that will be used to test the designated groups of features
- The approach will be described in sufficient detail to identify the major testing tasks
- Identification of the techniques that will be applied to judge the comprehensiveness of the testing effort
- Lists of both the Entry and Exit Criteria for the Tests
- Any additional completion criteria, e.g. error frequency
- The techniques to be used to trace requirements will be specified

Item pass/fail criteria
- The criteria to be used to determine whether each test item has passed or failed testing, and the Severity / Priority assigned to each class of Defect

Suspension criteria and resumption requirements
- The criteria to be used to suspend all or a portion of the testing activity on the test items associated with this Plan
- The testing activities that must be repeated when testing is resumed

Test Deliverables
- Identification of the deliverable Test documents, i.e. Test Plan, Test Specifications, Test Scripts, Test Logs, Test Defect Reports, Test Completion Report, Test Data

Testing Tasks
- The set of tasks necessary to prepare for and perform testing (task, predecessor, responsibility, effort, end date)
- Identification of inter-task dependencies and any special skills required

Environment needs
- Specification of both the necessary and desired properties of the Test environment
- The physical characteristics of the environment, including the hardware, communications and system software, the mode of usage, and any other software or supplies needed to support the test
- The level of security that must be provided for all components of the Test environment, i.e. hardware, software and data
- Identification of the tools needed
- The office space and desks etc. required for the Test team

Responsibilities
- Identification of the groups responsible for managing, designing, preparing, executing, witnessing, checking and resolving
- Identification of the groups responsible for providing the Test Items and the Environment Needs identified earlier

Staff and training needs
- Specification of the Test staffing by skill level
- Training requirements and options for providing necessary skills

Schedule
- Test milestones identified in the Project plan
- Any additional Test milestones needed
- Estimates for the time required to perform each Testing task
- The schedule for Testing tasks and Test milestones

Risks
- Identification of the high-risk assumptions of the Test Plan

NOTES: Within the Test Plan, a full description will be provided of the test environment, all required test scripts and harnesses, and all interfaces required with third party systems. Where the environment is not fully representative of the live environment, the reasons for the limitation will be provided and a risk assessment undertaken to determine the impact of this on the validity of the results obtained during the tests. The Test Plan will also specify the input data sources and any expected outputs, including volumes and types of data. An explanation will be provided for each data type and flow, relating it to the predicted or measured live environment.


APPENDIX D SAMPLE TESTING ROLES AND RESPONSIBILITIES

The following table outlines the test team roles and their responsibilities:

Test Manager
- Responsible for producing the Test Strategy
- Deliver the High Level Test Plan to be utilised for the delivery of detailed Test Plans
- Deliver Detailed Test Plans for all the respective test areas
- Recruitment of the Test Team (e.g. Test Analysts)
- Accountable for Phase Test Plans, e.g. ST, UAT, OAT etc.
- Leading the end-to-end testing effort as outlined in this Test Strategy document (ST, UAT, OAT etc.)
- Management of all testing resources
- Testing management reporting
- Responsible for creating and maintaining the test project plan for all core testing activities (as baselined in MS Project)
- Responsible for ensuring the agreed delivery of all project testing deliverables (as baselined)
- Responsible for estimating, planning and ensuring an appropriate level of resourcing for the project testing efforts
- Responsible for managing all project testing related issues, risks and dependencies, raising them according to the agreed issues and risk management procedures
- Responsible for ensuring the specified testing entry and exit criteria are met for ST, E2E, UAT, TECH
- Main escalation point between testing and other teams, i.e. business, Development

Test Phase Team Lead
- Provide input into the Test Strategy
- Responsible for providing input into estimating, planning and ensuring an appropriate level of resourcing for the test phases
- Create, maintain and ensure sign-off of the Test Plans
- Lead the testing effort, including delivery of the test cases/scripts, data and results
- Ensure test artefacts delivered are stored correctly in Test Director or Excel
- Defect Management relevant to the responsible test phase
- Manage test preparation and execution risks and issues
- Create, maintain and ensure sign-off of the Test Summary Reports

Test Analysts (TA)
- Provide input into the Test Plans
- Undertake testing activities ensuring these meet agreed specifications
- Create, maintain and execute the test cases in Test Director or Excel
- Devise, create and maintain test data
- Analyse and store test results in Test Director or Excel
- Raise, maintain and retest defects in Test Director or Excel
- Provide input into the Test Summary Reports

Technical Test Analysts (TTA)
- Provide input into the Technical Test Plans
- Undertake technical testing activities ensuring these meet agreed specifications
- Create, maintain and execute the test cases in Test Director or Excel
- Devise, create and maintain test data
- Analyse and store test results in Test Director or Excel
- Raise, maintain and retest defects in Test Director or Excel
- Provide input into the Test Summary Reports

Business Analysts (BA)
- Provide business input into the Test Plans, test cases and test data
- Execute test cases stored in Test Director or Excel
- Analyse and store test results in Test Director or Excel
- Raise, maintain and retest defects in Test Director or Excel
- Provide input into the Test Summary Reports, i.e. business workarounds and impact assessment

Technical Lead (Team)
- Provide solution details to Test Analysts
- Review detailed test plans produced by Test Analysts
- Input into and review test cases produced by Test Analysts
- Review and categorise/prioritise test results
- Validate, raise and progress defects to resolution

Vendors
- Input into the test cases
- Review and sign off the DTP and test cases/scripts
- Review of Test results
- Ownership of defects associated with the vendor solution
- Responsibility for issue resolution if associated with the vendor product/solution
- Assist in testing and defect reproduction for de-bug information purposes

Global Operations (GO)
- Deliver the OAT Detailed Test Plan
- Delivery and reporting of OAT testing results and progress
- Management of the OAT environments
- Execute OAT tests
- Validate, raise and progress defects to resolution
- Sign off OAT

Business User from THE CLIENT Legal
- Input into the development of the User Acceptance test scripts
- Review and sign off User Acceptance Detailed Test Plans
- Review and sign off User Acceptance test requirements and scripts
- Agree acceptance criteria based on the successful completion of test execution
- Perform User Acceptance Testing
- Sign off UAT

