
Master Test Plan [GMO Data]

Created from SDLC Template: Master Test Plan v1.2

Filename: Test Plan CRM Online Staging.doc
Revision: v1.1
Last Save Date: Friday, March 23, 2012
Author(s): Rajyalakshmi Sidda (v-rsidda)
File Location: \\server\folder
Read Only (public) Link:

Microsoft Confidential: This document is considered confidential to and is maintained as a trade secret by Microsoft Corporation. Information in this document is restricted to Microsoft authorized recipients only and any reproduction, distribution, or public discussion of this material is subject to the limits described in your non-disclosure agreement with Microsoft Corporation.

Table of Contents

1. Project Information
   1.1. In Scope
   1.2. Out of Scope
   1.3. Change Control
2. Assumptions and Test Approach
   2.1. Assumptions
   2.2. High Level Test Approach
3. Test Schedule
   3.1. Milestones Tracking
   3.2. Test Milestones Defined
        3.2.1. Milestone 0: Planning
        3.2.2. Milestone 1: Test Cases Creation
        3.2.3. Milestone 2: System Testing (End-to-End)
        3.2.4. Milestone 3: UAT/Regression
   3.3. Escalation Path Process
   3.4. Risks and Dependencies
        3.4.1. Dependencies
        3.4.2. Known Issues
   3.5. Test Status Reporting
4. System Overview
   4.1. Test Type Priority Levels
   4.2. Features to be Tested
5. Bug Reporting Tools and Methods
   5.1. Bug Reporting Tool Strategy
   5.2. Bug Closure Criteria
6. Test Environment Needs / Setup
   6.1. Servers
   6.2. Databases
   6.3. Platform
7. Release Criteria
   7.1. Test Pass/Fail Criteria
   7.2. Pass / Fail Criteria
   7.3. Suspension Criteria for Failed Acceptance Test
   7.4. Resumption Requirements
   7.5. UAT Entrance Criteria
8. Extended Team Roles & Responsibilities
   8.1. Roles
   8.2. Test Lead / Test Manager
   8.3. Test Engineers
Appendix A: Document Change History
Appendix B: Review and Sign-off
Appendix C: Glossary/Definitions
Appendix D: Related Documents/References


1. Project Information
1.1. In Scope

The following features will be tested. Note: the scope is based on the FSD provided by Anand.

CRMOnline Staging FSD.docx

The following scenarios will be tested for the data coming from SOURCE:
- Boundary Value Testing
- Data Integrity Testing
  - Record Counts Validation
  - Duplicate Check
  - Blank Value Check
- Referential Integrity Testing
  - Orphan Records Check
- Scenario Based Testing
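As an illustration of the orphan records check above, a minimal T-SQL sketch: Opportunity and Account are tables named later in this plan, but the AccountId key column and the parent/child relationship are assumptions for illustration only, not taken from the FSD.

    -- Hypothetical orphan-records check: find child rows whose foreign key
    -- has no matching parent row. Column names are assumed placeholders.
    SELECT c.OpportunityId, c.AccountId
    FROM   dbo.Opportunity AS c
    LEFT JOIN dbo.Account AS p
           ON p.AccountId = c.AccountId
    WHERE  c.AccountId IS NOT NULL
      AND  p.AccountId IS NULL;   -- any row returned is an orphan record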

DISCLAIMER: Any scope change or CR approved by the Solutions Manager after test plan sign-off will impact the overall test deliverables.

1.2. Out of Scope

The following items are not in scope:
1. Anything that is not explicitly mentioned under the In Scope section.

1.3. Change Control

Any changes to the application after baseline will be accompanied by a change request (CR) form. The product manager and the program manager will review change requests. The PM will involve both development and test leads in order to communicate change requests and get the appropriate ROMs and risks regarding the change. The rules of engagement for this process are:

1. PM maintains and communicates CRs.


2. PM will work with Dev/Test/Support for ROMs prior to approving a CR. Once ROMs are approved, the PM will assign the CR to the appropriate party and communicate the intent to proceed to the entire project team. The PM will enter the CR into the Product Studio database.
3. All testers on the team will be required to review the approved CRs to ensure that test cases/scripts reflect reality, or to update/add test cases where necessary.
4. Any change request proposed after completion of System Test will require the approval of the Group Program Manager and Group Product Manager. (This may need to change as our organizational structure changes, but the intent is that someone of higher authority needs to approve the impact that late in the cycle.) Such a change needs to have a CR number and be clearly documented in the FS prior to testing for it in SIT.

2. Assumptions and Test Approach


2.1. Assumptions

In preparing this Test Plan, and in planning for testing success, the following assumptions are made:
- The test environment for GDW V2 is available at all times during the testing phase, for all Drops, each time a patch/Drop application completes, with no alterations to test data by external entities during test execution.
- The SIT tests shall be carried out in the performance environment and, in the interest of schedule, all components should be up and running from the time test execution begins until the end.
- The entire test environment setup is configured properly after build/patch application and is stable, satisfying the entry criteria for all test cases.
- Test environments have valid test data, coming from upstream sources after the application of Drops/Patches, which will be used to carry out testing.
- The functional spec is frozen after baseline. Any changes made to the GDW V2 design during test case preparation need to be reflected in the functional specification, duly signed off and communicated by the System Analyst/Solutions Manager, and are to be called out in a formal change request.
- The tech spec is signed off: no schema changes during the testing phase of the project life cycle. Any such change will be added as a separate user story.
- The regression/release-level test cases set aside by the team shall satisfy complete coverage of the entire system for the release and shall be duly signed off by the respective stakeholders to confirm ETE coverage.
- All expected database objects (tables/views/jobs and their dependencies, such as DTS/SSIS packages and stored procedures) must be present in every Drop/Patch applied; without them, the BVT for that build will fail. All of those objects should also remain present in that environment throughout the testing cycle, until the test team signs off on it.

2.2. High Level Test Approach

1. Dev/Support environment readiness BVT should be completed before the build is dropped for SIT.
2. Separate SIT BVT scripts are to be run after build confirmation from step 1 by Dev.
3. The build will be accepted for testing after the SIT BVT is successful.
4. Priority 1 and Priority 2 test cases will be executed first; depending on the time available, Priority 3 test cases will be executed.
5. Test cases are prioritized by type as follows:

   Priority 1: BVT, schema checks, job runs, table and column existence, key column validations, duplication checks, join conditions, output table population, ETE waterfall testing.
   Priority 2: Data derivation checks (non-key columns matching with source), DTDS tables, indexes.
   Priority 3: Extended functionalities, data TAH population after every run, truncation of GDW V2 data mart waterfall tables after every run, negative testing scenarios.

6. The source database in the SIT environment should be built using the latest production dump data (TBD).
7. The SIT environment setup should be vis-à-vis production.

3. Test Schedule
3.1. Milestones Tracking
Milestone                               Targeted Date    Actual Date
Test Resource Identified
Master Test Plan Complete
Test Plan Sign Off
Test Cases and Test Scripts Sign Off

3.2. Test Milestones Defined

3.2.1. Milestone 0: Planning


The test manager or test lead is involved at this milestone to provide high-level test ROMs based on the BRD delivered to the team by the Product Manager. The Test Manager or Test Lead will begin working on the requirements for the Master Test Plan, working with the PM and the Development Manager. The team should be assembled based on the needs discovered during meetings with the PM. Once the team is assembled, the testers, with the test lead, should begin familiarizing themselves with the project documents, building preliminary test cases, and building use cases and models if time permits. The expected deliverable out of Milestone 0 is test case and test script shells: the step-by-step validation scripts that the testers will use during SIT, which get written and refined during integration testing.

3.2.2. Milestone 1: Test Cases Creation


This is the longest phase of the process. Integration testing and the test cases are completed during this phase. As functionality becomes available during development, the testers test it in standalone pieces and refine their test cases accordingly. The goal of the integration test period is to validate, earlier rather than later, that the functionality defined in the functional specification is what has been implemented. Also during this period, as more code and functionality are integrated, more regression testing can happen to minimize surprises during the System Test period. During integration test, Testing and Development can negotiate for new builds. The test team will test each piece of functionality as it becomes available from the development group. Any showstoppers found during this period are put into Product Studio and must be fixed prior to the next build. All other bugs found during integration testing will be tracked in Product Studio and will be


regressed prior to System Test. Bug triage will happen weekly with the test lead, developer lead, and PM. Priority will be set on bugs, and solutions and workarounds discussed.

3.2.3. Milestone 2: System Testing (End-to-End)


Example 1: During this phase the product will be fully tested in an end-to-end fashion. It is divided into test passes. {Project Name} consists of {X} full test passes, a configuration pass, and several partial test passes, as needed (approximately {x} weeks overall). System test ends when the team agrees that the product is ready for release to UAT. The release-from-test document will be submitted by the Test Lead to the project team, confirming that the test team has completed the milestone and highlighting release issues.

Example 2: The goal of this testing phase is to verify the quality of the entire product. After code complete, the system test team will perform a wide range of tests. Testing will run a full functional scenario script pass, two full test passes of approximately one week each, ad hoc testing, a regression pass, and various focused testing passes, as needed. The result is a six-week testing period after code complete. Status reporting during this period will focus on bug count, script completion status, and the overall stability of the builds, reported to the project review committee.

3.2.4. Milestone 3: UAT/ Regression


This is the final phase of testing prior to implementation. The SIT test team will be retained to regress any remaining bugs from SIT, in addition to reproducing and regression-testing any issues discovered during UAT. For secondary releases, the test team may want to run in-depth regression tests, in addition to the standard BVT, on functionality from previous releases. This ensures that the new feature set does not impact the previous codebase.

3.3. Escalation Path Process

The escalation path process that GMO Data will employ will be different for each test phase:

Integration Testing: Only Severity 1 and Severity 2 bugs, Priority 1 and Priority 2 bugs, and high-priority Severity 3 issues will be addressed during this phase. The escalation process will be to enter the bugs into TFS and have PM/Dev/Test triage once a week.

System Test: Severity 1 and Severity 2, and Priority 1 and Priority 2, bugs that prevent the testers from completing the test will be reported immediately, with a request for immediate remediation (a patch). This alerts test, and whoever else is in the environments, to possible downtime. This also covers blocked test cases.

Post Production: All post-production defects will be logged into TFS and will be tracked and followed through the PCR process approved by the CCB.

The escalation path for all other issues in the project is:

For all Dev issues: Dev Team -> Yogesh Goswami -> Godwin Suares -> Payal Gupta
For all Test issues: Test Team -> Ankur Goel -> Yogesh Goswami -> Payal Gupta

3.4. Risks and Dependencies


Risk: Release Schedule offers minimal contingency
Probability: 50%    Severity: High
Contingency Plan: Mitigation: monitor progress and identify risks and issues immediately. Look for more contingencies as execution moves forward.

Risk: Additional/new requirements
Probability: 25%    Severity: Low
Contingency Plan: Management must be well disciplined in controlling scope. Follow the change control process. No changes should be introduced after baseline.

Risk: Test Environment Stability
Probability: 75%    Severity: High
Contingency Plan: Work with the Dev/Support team to reduce system downtime and environment failures that lead to slowdowns or malfunctions, and to ensure necessary access. Communicate escalation procedures for system-down issues to all team members.

Risk: Dependency on other projects
Probability: 50%    Severity: Medium
Contingency Plan: Work with the Release PM to address overlapping project timelines and ensure a project plan that addresses test schedule needs.

Risk: Test Environment Readiness
Probability: 25%    Severity: Medium
Contingency Plan: If the test environment is not ready by the time the first drop is ready, test execution will be delayed. The Support team manages SIT environments but not client machines; client machines will be the responsibility of the test engineers.

Risk: Long runtime of build-dependent jobs
Probability: 95%    Severity: High
Contingency Plan: SIT testing will be affected if the test servers go down too often; in that case, test cases will take a long time to complete or will fail to complete successfully.

Risk: Codebase on which the build is deployed will nullify regression testing
Probability: 20%    Severity: Medium
Contingency Plan: The Drop/Patch going to any environment as part of the release depends on the codebase on which it is deployed; the build will not be successful unless the codebase incorporates all dependencies of the Drop/Patch.

Risk: Change in scope post baseline
Probability: 40%    Severity: High
Contingency Plan: This will have a high impact on the test execution schedule and on deliverables, as ROMs will fluctuate significantly.

Risk: Functional Spec
Probability: 30%    Severity: High
Contingency Plan: If the specs are not stable or not defined properly on time for test planning and execution, test execution is at risk.

3.4.1. Dependencies:
- The GMODW V2 SQL Server should have valid data that enables smooth execution of all covered test cases.
- We should have read-only access to the GMODW V2 DB and other required DBs to take dumps for testing purposes.
- The GMO DW V2 environment should be ready, and all jobs should be completed within the targeted testing time frame; this needs to be clearly communicated.

3.4.2. Known Issues:


Note: During the execution of the project, any planned or unplanned risk that is not mitigated will be addressed as a known issue, followed up with MS/ACN management, and signed off on in the system test plan after the corresponding changes are made.

3.5. Test Status Reporting

The test team will report status daily during test case creation and system/integration testing. During the testing phase, bug status will be reported daily in VSTF, and test case execution status will be reported through TFS. There will be daily scrum meetings where priority will be decided based on business needs.

4. System Overview
A SharePoint folder called GMOBI is used for all reporting platforms. As of today, .NET/Silverlight applications are running on the GMOBI environment. Toulouse, Data Card, etc. are on GMOBI. In FY11, Toulouse will be converted into a data warehouse. There will be multiple source systems; for example, if GDW V2 requests a new data source from Siebel, it will be pulled from the DWH. Microsoft uses a third-party vendor for processing rental data; under that agreement, the rental data cannot be stored and processed in the GMO data warehouse. Camprod is a data mart, which is an environment provided to end users for ad-hoc queries.

4.1. Test Type Priority Levels

Below is a table illustrating the types of testing that will be provided and the degree of testing that will be performed by the Test Team. The following is a key to rating each test type based on project expectations:

High      High risk area; test this area very hard
Medium    Standard testing
Low       Low risk area; test if time allows
None      No testing desired

Test Type: Data / DB Integrity
Level Desired: High
Definition: Data integrity is an area that is not always visible, but would impact transaction processing or reporting if data is incorrect. Thus, absent a UI, the test team will be compelled to author the test tools that enable the validation of that background data. This is typical of a SQL-heavy application.

Test Type: Conversion Testing
Level Desired: Medium
Definition (assumption): Conversion testing is used to test any data that must be converted to ensure the application will work properly. This could be conversion from a legacy system or changes needed for the new schema.

Test Type: Functional Testing
Level Desired: High
Definition (requirement): Ensure proper target-of-test functionality, including navigation, data entry, processing, and retrieval. Many applications now use XML to input and output data, so any testing that requires validating XML schemas or XML interfaces should be listed here.

Test Type: Use Case
Level Desired: Medium
Definition: Ensure proper target-of-test and background processes function according to required business models and schedules.

Test Type: End to End
Level Desired: Medium
Definition: End-to-end testing tests all inputs (super-systems) and outputs (sub-systems) along with the application. A controlled set of transactions is used, and the test data is published prior to the test along with the expected results. This testing ensures that the application will interact properly with the other systems.

Test Type: Automated Testing
Level Desired: Low
Definition: Automated testing can be used to automate regression and functional testing. This can be very helpful if the system is stable and not changed often.

Test Type: Regression of Unchanged Functionality
Level Desired: Low
Definition: If regression must occur for functional areas that are not being changed, specify the functional areas to regress and the level of regression needed.

4.2. Features to be tested


1) Boundary Value Testing (refer to the Excel sheet for BVT requirements)

   a. Validate that the following tables are created in the database:
      1. Account
      2. Contact
      3. CRMBIVIEW
      4. CRMBIVIEW_EMPCOUNT
      5. CRMBIVIEW_MANAGED_TRIAL_LEAD
      6. CRMBIVIEW_UNMANAGED_TRIAL_LEAD
      7. Dst1_livesubscription
      8. Opportunity
      9. Lead


      10. CRMOnlineLog
      11. CRMOnlineDeltaLog

   b. Validate that all the specified columns are present in the above-mentioned tables.
   c. Validate that the tables are created with the required datatype, length, and column constraints as specified in the FSD.
   d. Validate that the below columns are present in the structure of each of the first 9 tables mentioned above:

COLUMN NAME      DATATYPE/CONSTRAINT
CurrentMember    bit, Not Null
StartDate        SmallDateTime, Not Null
EndDate          SmallDateTime, Null

Structure for all the tables..xls
2) Data Integrity Testing (for the first 9 tables mentioned in BVT)

   a. Record Counts Validation: count comparison between the qualified records in the source tables (on server MBSCRMDEV03) and the records in the target table (on server GMODEVDWSQL03).
   b. Duplicate Check: validate whether any duplicate records exist in the above-mentioned tables.
   c. Data Validation: validate that all the qualified records present in the source are present in the target tables by using an EXCEPT query between source and target, and vice versa.
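A sketch of how these three checks are typically expressed in T-SQL. The four-part linked-server path to MBSCRMDEV03 and the AccountId key column are assumptions for illustration; the actual server access method and business keys come from the FSD.

    -- a. Record count comparison between source (MBSCRMDEV03) and target.
    --    The linked-server name below is an assumed configuration.
    SELECT (SELECT COUNT(*) FROM [MBSCRMDEV03].[MBSIT_MSCRM].dbo.Account) AS SourceCount,
           (SELECT COUNT(*) FROM dbo.Account)                             AS TargetCount;

    -- b. Duplicate check: any key appearing more than once is a defect.
    --    AccountId as the business key is an assumption.
    SELECT   AccountId, COUNT(*) AS Occurrences
    FROM     dbo.Account
    GROUP BY AccountId
    HAVING   COUNT(*) > 1;

    -- c. Data validation via EXCEPT; run in both directions (swap the
    --    operands to catch extra rows in the target as well).
    SELECT AccountId FROM [MBSCRMDEV03].[MBSIT_MSCRM].dbo.Account
    EXCEPT
    SELECT AccountId FROM dbo.Account;  -- rows in source missing from target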

3) Scenario Based Testing

   I. Testing for SCD2

   a. When a new record is inserted in the source, the same record should be populated in the destination with CurrentMember = 1. This will be validated by manually inserting a record in the source and checking for it in the target after running the package.
   b. When a record is updated in the source, a new record containing the update should be inserted with CurrentMember = 1, and CurrentMember should be set to 0 on the old record. This will be validated by manually updating a record in the source and checking the target after running the package.
   c. When a record is deleted in the source, CurrentMember for that particular record should be 0 in the destination. This will be validated by manually deleting a record in the source and checking the target after running the package.
   d. Validate that CurrentMember is 1 for all active records and 0 for all inactive records.
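A minimal T-SQL sketch of the CurrentMember invariant in item d; AccountId as the business key is an assumption for illustration:

    -- No business key may have more than one active (CurrentMember = 1)
    -- row. Deleted keys may legitimately have zero active rows, per
    -- scenario c above, so only counts above one are flagged.
    SELECT   AccountId
    FROM     dbo.Account
    GROUP BY AccountId
    HAVING   SUM(CASE WHEN CurrentMember = 1 THEN 1 ELSE 0 END) > 1;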

   II. Testing for Delta Load in the Tables

   e. StgCreateDate should contain a datetime value for all inserted records; it should not be blank or null.
   f. StgUpdateDate should contain a datetime value only when the record has been updated; otherwise it should be NULL.
   g. Validate that the delta load is being performed for all 5 tables listed below. This will be verified by checking StgCreateDate and StgUpdateDate for existing and new records (see the sketch after the table list).

      1. Account
      2. Contact
      3. DST1_LiveSubscription
      4. Lead
      5. Opportunity
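A sketch of the StgCreateDate/StgUpdateDate checks in items e and f, using Account as an example of the five tables above. The ordering check in the second query is a derived sanity check, an assumption rather than a requirement stated in the FSD.

    -- e. StgCreateDate must be populated for every record.
    SELECT COUNT(*) AS MissingCreateDate
    FROM   dbo.Account
    WHERE  StgCreateDate IS NULL;

    -- f. Where StgUpdateDate is populated, it should not predate the
    --    record's creation; such rows indicate a broken delta load.
    SELECT COUNT(*) AS BadUpdateDate
    FROM   dbo.Account
    WHERE  StgUpdateDate IS NOT NULL
      AND  StgUpdateDate < StgCreateDate;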

5. Bug Reporting Tools and Methods


5.1. Bug Reporting Tool Strategy

We shall be using TFS for bug reporting.

Priority 1: Highly likely to affect users. Must fix ASAP. Usually a bug that is blocking test from further testing; any bug in GDW V2 preventing data from going through the waterfall, including the final output step. We cannot enter UAT with P1 bugs. If the bug originates in the DWH, it is not a GDW V2 bug.

Priority 2: Medium likelihood of affecting users. Should be fixed soon, before product release; any bug that affects data in the final output (e.g., mapping incorrect columns, masking issues, missing data). We can enter UAT, but we cannot go live.

Priority 3: Low likelihood of affecting users. Fix if time permits; somewhat trivial, may be postponed. Bugs that are currently in production that do not match the original BRD. We will go live and fix these post production.

Severity 1: Impact to the user is high: data loss and/or a crashing bug; definition as per PS.

Severity 2: Impact to the user is medium: major functionality or other severe problems; the product crashes in obscure cases; definition as per PS.

Severity 3: Impact to the user is low: minor functionality problems; may affect fit and finish; definition as per PS.

Severity 4: The bug contains typos, unclear wording, or error messages in low-visibility fields; definition as per PS.

5.2. Bug Closure Criteria

Regression testing will be performed to ensure that all closed bugs remain resolved and have not been reintroduced by new system drops. If a bug is not fixed, or breaks again, it will be reopened and updated with any new details.


6. Test Environment Needs / Setup


6.1. Servers

Detail the servers and IIS boxes needed for this test effort.

Machine Name    Purpose
MBSCRMDEV03     Source
GMOSTG01        Destination

6.2. Databases

Detail the databases needed for this test effort.

Database Name    Purpose
MBSIT_MSCRM      Source
CRMONLINE        Destination

6.3. Platform

The primary platform of the GMODW V2 Data Testing Server will be:

Software             Version
Operating System     Windows Server Enterprise SP2
IPAK                 TBD
Internet Explorer    Internet Explorer 8
SQL Server           SQL Server 2008

7. Release Criteria
7.1. Test Pass/Fail Criteria

The product will pass or fail depending upon the results of testing actions. If the actual output from an action is equal to the expected output specified by a test case, then the action passes. Should any action within a test case fail, the entire feature or sub-feature fails. If a test case fails, it is not assumed that the code is defective. A failure can only be interpreted as a difference between expected results (derived from project documentation) and actual results. There is always the possibility that expected results are in error because of misinterpretation of project documentation, or because the documentation is incomplete or inaccurate.

7.2. Pass / Fail Criteria

Individual test case pass/fail criteria are defined keeping in mind the following:
- Actual results equal expected results.
- For any benchmarks provided by the business analysts and documented by development, all processes will finish update/execution in the specified amount of time.
- If a test case fails and a bug or issue is logged, communicating this to the project team is necessary to keep the group informed of progress and of how issues encountered by one application may affect the test plans for downstream systems.

7.3. Suspension Criteria for Failed Acceptance Test

The system test team may suspend partial or full testing activities on a given build if any of the following occurs:
- Files are missing from the new build.
- The PM designate cannot install the new build or a component.
- The PM designate cannot configure the build or a component.
- There is a fault with a feature that prevents its testing.
- The item does not contain the specified change(s).
- A severe problem has occurred that does not allow testing to continue.
- Development has not corrected the problem(s) that previously suspended testing.
- A new version of the software is available to test.

7.4. Resumption Requirements

The steps necessary to resume testing:
- Clean previous code from machines.
- Re-install the item.
- The problem that resulted in suspension is corrected.

Resumption of testing will begin when the following is delivered to the system test team:
- A list of all bugs fixed in the new version.
- A list of all changes to the modules in the new version and what functionality they affect.

7.5. UAT Entrance Criteria

The UAT entrance criteria necessary to allow the code to migrate to User Acceptance Testing are listed below and should be the same as the release criteria agreed upon at baseline:
- There are no open Severity 1 or Severity 2 bugs in Active or Resolved state.
- There are no open Priority 1 or Priority 2 bugs in Active or Resolved state.
- Test cases scheduled for the System Test phase have passed.
- The Release to UAT Report (listing outstanding issues) is completed and signed by the System Test Manager.

8. Extended Team Roles & Responsibilities


8.1. Roles
Resource                          Role                          Occupancy
Yogesh Goswami (v-yogosw)         GMO Data Test Lead            100%
Ankur Goel (v-angoel)             GMO Data Sr. Test Engineer    100%
Tanay Anand (v-tannan)            GMO Data Sr. Test Engineer    100%
Chamandeep Singh Bedi (v-chsin)   GMO Data Test Engineer        100%
Rajyalakshmi Siddam (v-rsidda)    GMO Data Test Engineer        100%


8.2. Test Lead / Test Manager

A Test Lead is responsible for the overall coordination of the test processes and is dedicated to the overall test effort for the duration of the project. This person will be responsible for the following specific areas of testing:
- Work with the Project Manager to define the Test ROM and Master Test Plan.
- Provide the completed Test ROM and System Test Plan to the Project Manager.
- Coordinate all status review meetings with the PM.
- Consolidate overall weekly test status updates and provide them to the Project Manager.
- Track the bugs that have been reported in Product Studio.
- Ensure the scrum meetings take place daily with full participation.
- Ensure QA engineers are communicating effectively with their development counterparts.
- Ensure the creation of test cases and test scripts.
- Ensure the test engineers are entering all bugs into TFS.
- Ensure the execution of these test cases by the responsible Test Engineer(s).
- Unblock test execution in case of any eventualities, environmental or otherwise.
- Raise risks/issues when all attempts to unblock execution have failed.

8.3. Test Engineers

Test engineers are responsible for performing the requisite integration, system, and regression testing. They are responsible for the following:
- Prepare the test cases/scripts.
- Execute all identified test cases.
- Provide test status updates to the Test Lead according to the defined schedule.
- Create bugs as appropriate, according to the issued guidelines.
- Immediately follow up with Test Leads and Developers on any blocking or critical bugs.


Appendix A: Document Change History


Version No.   Date         Name (Alias)                      Description of Change
1.0           05-30-2011   Rajyalakshmi Siddam (v-rsidda)    Test Plan Creation for CRM Online Staging


Appendix B: Review and Sign-off


Below is a list of the project team members and required reviewers, as distinguished from approvers.

Person   Role   Contact   Reviewed Date
Anand    BSA


Appendix C: Glossary/ Definitions


Term   Definition
BVT    Build Verification Testing
GMO    Global Marketing Operations
CPE    Customer/Partner Experience
CR     Change Request
E2E    End to End
LIR    Licensing Information Repository
PM     Program Manager
PS     Product Studio
QA     Quality Assurance
ROM    Rough Order of Magnitude
SAB    Software Assurance Benefits
SIT    System Integration Testing
UAT    User Acceptance Test
VSTF   Visual Studio Team Foundation


Appendix D: Related Documents/References


Document                   Document Location
Functional Specification   FSD attached at the start of this Test Plan.
