Cognizant's Findings
- Review & Sign-off Templates, Checklists, etc.
- Risk Management
- Knowledge Management
- Requirement Change Management
- Build/Release Management
- Test Artifacts Configuration
- Test Measurement Strategy
The effectiveness and efficiency of testing are not measured adequately to initiate corrective and preventive actions.
Non-availability of tools to capture and gather data for metrics analysis; only basic metrics are captured at the project level (such as Voice of Customer; Defects per Severity, Category, and Priority; and Defect Status).
- Proactive rather than reactive management.
- Helps in achieving operational/process excellence through the identification of areas for improvement.
- Embeds predictability (into the estimation process & calibration).
- Provides a dashboard for senior management/stakeholders with a performance view of the team and projects.
[Chart: current implementation status of each metric (Not Implemented / Partially Implemented / Implemented) across Sales & Marketing, Service Management, Order Management, Order Provisioning, and OSS Tools. Metrics shown: Voice of Customer, Test Execution Productivity, Cost of Quality, Automation ROI, Defect Seepage to UAT, Defect by Severity, Defect by Category, Schedule Variance, Test Design Productivity, Review Effectiveness, Requirement Traceability Index, Test Case Effectiveness, Defect Seepage to SIT, Defect Rejection Ratio, Defect Ageing by Severity, Test Cases Planned vs. Executed.]
Desired State
1. Develop an effective Metrics Process.
2. Establish an efficient and robust Metrics Model.
3. Make tool changes to capture and gather data for metrics analysis.
4. Identify metrics at both the Project and TSID level.
5. Identify benchmarks at the industry and/or organization level.
6. Initiate data capture and metrics analysis.
7. Continue the improvement cycle through metrics analysis.
8. Publish MIS reports/dashboards for stakeholders and senior management.

Maturity:
1. Inception: completely people-dependent, with very minimal or no test management and without consistent processes.
2. Functional: defined test processes, but less effective, with basic test management practices.
3. Performing: enterprise-wide testing framework with measurable processes, predictive capabilities, and sharing of best practices.
4. Best-in-Class: advanced test management and industry-leading practices with continuous process improvement and innovation.
[Chart: desired implementation status of each metric (Not Implemented / Partially Implemented / Implemented) across Sales & Marketing, Service Management, Order Management, Order Provisioning, and OSS Tools. Metrics shown: Voice of Customer, Test Execution Productivity, Cost of Quality, Automation ROI (as applicable), Defect Seepage to UAT, Defect by Severity, Defect by Category, Schedule Variance, Test Design Productivity, Review Effectiveness, Requirement Traceability Index, Test Case Effectiveness, Defect Seepage to SIT, Defect Rejection Ratio, Defect Ageing by Severity, Test Cases Planned vs. Executed.]
PHASE 1
1. Define the Metrics Strategy/Process: time-box the strategy; define the collection & analysis mechanism.
2. Identify metrics.
3. Identify initial benchmarks.
4. Identify the data collection mechanism and changes in tools.
5. Identify roles & responsibilities.

PHASE 2
1. Collection of metrics.
2. Analysis of metrics.
3. Refine & improve the strategy, if required.
4. Trend analysis.
5. Causal analysis, with improvements fed back into the system.

PHASE 3
1. Define organization benchmarks.
2. Statistical analysis of metrics.
3. Change processes to include metrics related to post-production efficiency.
4. Causal analysis & continuous improvement cycle.

Timeline: Inception to Functional in 1-3 months; Functional to Performing in 4-6 months.
Change Category | Change Description | Needed in Phase
Change in Tool | PPM (ATLAS): for accurate time recording | Phase 1
Change in Tool | Quality Centre: for defect model standardization | Phase 1
Institutionalization of Process | Metrics Process: to define the Metrics Model and strategy | Phase 1
Change in Process | Other relevant processes which impact metrics (estimation process, test execution process, defect management process, etc.) | Phase 1
- | Industry benchmarks to be adopted to measure our performance against | Phase 1
Change Description | Needed in Phase
Introduction of a Metrics Manager role, to perform metrics analysis at the TSID level | Phase 2
Development of performance benchmarks relevant to our organization (to be done once we have 6+ months of data) | Phase 3
Statistical analysis of metrics | Phase 3
Introduction of metrics related to post-production efficiency | Phase 3
Appendix
I4 Metrics Model (1 of 2)
A four-stage cycle: Identify, Implement, Investigate, Improve.
I4 Metrics Model (2 of 2)
Identify: define the Metrics Strategy (what to measure and how to measure it) and the roles & responsibilities; capture metrics & measurements using tools. The cycle then continues through Implement, Investigate, and Improve.
Types of Metrics
Process Metrics are managed through the I4 cycle (Identify, Implement, Investigate, Improve) and are measured for:
- Small Projects
- Large Projects
- Ongoing Enhancements
Proposed Metrics 1 of 6

Metrics | KPI | Tool Change Required
Voice of Customer | Average of individual ratings from the feedback form | NO
Test Execution Productivity | # of test cases executed per day per tester | -
Cost of Quality | [(Appraisal Effort + Prevention Effort + Failure Effort) / Total Project Effort] x 100 | YES [effort tracking in PPM needs change]
Automation ROI | Total benefit derived from automation / Total cost of automation | YES [effort tracking in PPM needs change]

NOTE: These are a few initial proposed metrics; the list will be refined.
Proposed Metrics 2 of 6

Metrics | KPI | Tool Change Required
Defect Seepage to UAT | # of defects seeped to UAT | -
Defect by Severity | # of defects per severity | YES [Quality Centre to be streamlined for severity values]
Defect by Category | # of defects per category | YES [Quality Centre to be streamlined for category values]

NOTE: These are a few initial proposed metrics; the list will be refined.
Proposed Metrics 3 of 6
Project-Level Metrics: Process Metrics

Metrics | KPI | Purpose | Tool Change Required
Schedule Variance | [(Actual End Date - Estimated End Date) / (Estimated End Date - Estimated Start Date)] x 100 | - | -
Test Design Productivity | # of test cases created per day per tester | Monitor the test design performance and thereby improve it | NO
Test Execution Productivity | # of test cases executed per day per tester | Monitor the execution performance and thereby improve it | NO
Review Effectiveness | Internal vs. external review feedback of test cases | Evaluate the efficiency of finding defects during reviews | NO

NOTE: These are a few initial proposed metrics; the list will be refined.
Proposed Metrics 4 of 6
Project-Level Metrics: Process Metrics

Metrics | KPI | Tool Change Required
Requirement Traceability Index | Requirements mapped to test scenarios/cases | NO
Test Case Effectiveness | [(Total Defects - Defects not mapped to test cases) / Total # of test cases] x 100 | -

NOTE: These are a few initial proposed metrics; the list will be refined.
Proposed Metrics 5 of 6

Metrics | KPI | Purpose | Tool Change Required
Defect Seepage to SIT | # of defects seeped to SIT from earlier phases | Evaluate the number of defects that could not be contained in the phases prior to SIT | -
Defect Seepage to UAT | # of defects seeped to UAT | Measure the benefit of detecting high-severity defects early, before they reach the users | -
Defect by Severity | # of defects per severity | - | YES [Quality Centre to be streamlined for severity values]
Defect by Category | # of defects per category | Provides a snapshot of how many defects are detected per category | YES [Quality Centre to be streamlined for category values]

NOTE: These are a few initial proposed metrics; the list will be refined.
Proposed Metrics 6 of 6

Metrics | KPI | Tool Change Required
Defect Rejection Ratio | # of rejected defects in SIT / Total defects | NO
Defect Ageing by Severity | Time taken to fix the defect | NO
Test Cases Planned vs. Executed | No. of test cases planned vs. executed | -

NOTE: These are a few initial proposed metrics; the list will be refined.
VOICE OF CUSTOMER

Metric: Voice of Customer
KPI: Average of individual ratings from the feedback form

Measured For | YES / NO | Frequency
Programs | YES | At the completion of UAT for each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | At the completion of the UAT phase of the project
Enhancements | NO (to be confirmed) | None
Feedback form questions:
1. Was the testing team efficient in responding to the project's and the business's needs?
2. Has the testing team aligned to the testing schedule and project budget as agreed?
3. How do you rate the quality of the test planning & control deliverables (Test Plan, Test Estimates, Daily Test Status, and Test Closure Report)?
4. How do you rate the testers' efficiency with respect to test cases, test execution, and the quality of defects logged?
5. How do you rate the usage and effectiveness of the tools used (QC / QTP / LR)?
6. How do you rate the testing team's efficiency with respect to defect seepage to UAT?
7. How would you rate the overall experience of the testing services offered?

Each response is converted to an equivalent rating of 5, 4, 3, 2, or 1.
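Since the KPI is simply the arithmetic mean of the equivalent ratings, here is a minimal sketch of the calculation; the question keys and scores below are illustrative, not taken from the deck.

```python
# Voice of Customer: average of the individual equivalent ratings (5 = best, 1 = worst)
# from the completed feedback form. Question keys are illustrative.
feedback = {
    "responsiveness": 4,
    "schedule_and_budget": 5,
    "plan_and_control_deliverables": 4,
    "tester_efficiency": 3,
    "tool_usage": 4,
    "defect_seepage_to_uat": 5,
    "overall_experience": 4,
}

voice_of_customer = sum(feedback.values()) / len(feedback)
print(f"Voice of Customer rating: {voice_of_customer:.2f}")  # 4.14
```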
Process flow:
1. On completion of the project, the Test Manager sends the testing feedback form to the Project Manager.
2. The Project Manager fills in the feedback form and returns it.
3. The Test Manager computes the average rating across the different feedback parameters.
DEFECT SEEPAGE TO UAT

Metric: Defect Seepage to UAT
KPI: # of defects seeped to UAT

Measured For | YES / NO | Frequency
Programs | YES | At the completion of UAT for each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | At the completion of the UAT phase of the project
Enhancements | YES | Monthly, consolidated for all enhancements that went live in the month
Process flow:
1. The Test Manager analyzes the UAT defects and compares them with the established project- and TSID-level goals.
2. The data is submitted to the Metrics Manager for the trend graph and TSID-level analysis.
3. Close.
TEST EXECUTION PRODUCTIVITY

Metric: Test Execution Productivity
KPI: # of test cases executed per day per tester

Measured For | YES / NO | Frequency
Programs | YES | At the completion of each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | At the completion of the project
Enhancements | YES | Monthly, consolidated for all enhancements that went live in the month
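A minimal sketch of the productivity calculation, assuming per-day execution counts are available as (tester, date, cases executed) records of the kind that could be exported from QC; the record layout is an assumption for illustration.

```python
from collections import defaultdict

# (tester, date, cases_executed) records; layout is illustrative.
execution_log = [
    ("tester_a", "2024-01-08", 12),
    ("tester_a", "2024-01-09", 15),
    ("tester_b", "2024-01-08", 9),
]

# Total cases and distinct working days per tester.
cases = defaultdict(int)
days = defaultdict(set)
for tester, date, executed in execution_log:
    cases[tester] += executed
    days[tester].add(date)

for tester in cases:
    productivity = cases[tester] / len(days[tester])
    print(f"{tester}: {productivity:.1f} test cases executed per day")
```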
Process flow:
1. During the test execution phase, the tester logs the test cases executed per day; execution time is logged in QC.
2. At the completion of test execution, the Test Manager collates this data and computes Test Execution Productivity.
3. The data is submitted to the Metrics Manager for the trend graph and TSID-level analysis.
4. Close.
COST OF QUALITY

Metric: Cost of Quality
KPI: [(Appraisal Effort + Prevention Effort + Failure Effort) / Total Project Effort] x 100

Measured For | YES / NO | Frequency
Programs | YES | At the completion of each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | At the completion of the project
Enhancements | NO | N/A
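A worked sketch of the Cost of Quality formula above, with illustrative effort figures of the kind that would be logged in ATLAS.

```python
# Cost of Quality = (appraisal + prevention + failure effort) / total effort * 100.
# All efforts in person-hours; figures are illustrative.
appraisal_effort = 120.0   # e.g. reviews, test execution
prevention_effort = 40.0   # e.g. training, process definition
failure_effort = 80.0      # e.g. rework, defect fixing
total_project_effort = 1000.0

cost_of_quality = (appraisal_effort + prevention_effort + failure_effort) / total_project_effort * 100
print(f"Cost of Quality: {cost_of_quality:.1f}%")  # 24.0%
```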
Process flow:
1. During the testing lifecycle, the testing team logs its effort in ATLAS.
2. At the completion of the project/release, the Test Manager collates this data and computes Cost of Quality.
3. The data is submitted to the Metrics Manager for the trend graph and TSID-level analysis.
4. Close.
AUTOMATION ROI

Metric: Automation ROI
KPI: Total benefit derived from automation / Total cost of automation

Measured For | YES / NO | Frequency
Programs, if automation is involved | YES | At the completion of each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | -
Enhancements | NO | N/A
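A worked sketch of the Automation ROI ratio; approximating "total benefit" as manual execution effort saved is an assumption for illustration, not a definition from the deck.

```python
# Automation ROI = total benefit derived from automation / total cost of automation.
# Benefit approximated here as manual-execution effort saved; figures illustrative.
manual_effort_saved_hours = 600.0  # effort avoided across regression cycles
automation_cost_hours = 250.0      # scripting, maintenance, licences (in effort terms)

automation_roi = manual_effort_saved_hours / automation_cost_hours
print(f"Automation ROI: {automation_roi:.2f}")  # 2.40 -> benefit outweighs cost when > 1
```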
Process flow:
1. During the testing lifecycle, the testing team logs its effort in ATLAS.
2. At the completion of the project/release, the Test Manager collates this data and computes Automation ROI.
3. The data is submitted to the Metrics Manager for the trend graph and TSID-level analysis.
4. Close.
DEFECT BY SEVERITY

Metric: Defect by Severity
KPI: # of defects per severity

Measured For | YES / NO | Frequency
Programs | YES | At the completion of each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | At the completion of the project
Enhancements | YES | Monthly, consolidated for all enhancements that went live in the month
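A minimal sketch of the severity tally, assuming one severity value per defect can be exported from Quality Centre once the severity field is standardized; the values below are illustrative.

```python
from collections import Counter

# Severity value of each logged defect; values are illustrative.
defect_severities = ["Critical", "High", "High", "Medium", "Low", "Medium", "High"]

by_severity = Counter(defect_severities)
for severity, count in by_severity.most_common():
    print(f"{severity}: {count}")
# High: 3, Medium: 2, Critical: 1, Low: 1
```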
Process flow:
1. At the completion of the project/release, the Test Manager collates the defect data and computes Defect by Severity.
2. The data is submitted to the Metrics Manager for the trend graph and TSID-level analysis.
3. Close.
DEFECT BY CATEGORY

Metric: Defect by Category
KPI: # of defects per category

Measured For | YES / NO | Frequency
Programs | YES | At the completion of each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | At the completion of the project
Enhancements | YES | Monthly, consolidated for all enhancements that went live in the month
Process flow:
1. At the completion of the project/release, the Test Manager collates the defect data and computes Defect by Category.
2. The data is submitted to the Metrics Manager for the trend graph and TSID-level analysis.
3. Close.
SCHEDULE VARIANCE

Metric: Schedule Variance
KPI: [(Actual End Date - Estimated End Date) / (Estimated End Date - Estimated Start Date)] x 100

Measured For | YES / NO | Frequency
Programs | YES | At the completion of TSID activities for each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | At the completion of the TSID activities of the project
Enhancements | YES | Monthly, consolidated for all enhancements that went live in the month
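A worked sketch of the Schedule Variance formula with illustrative dates; a positive result is a percentage overrun, a negative result an early finish.

```python
from datetime import date

# Schedule Variance = (actual end - estimated end) / (estimated end - estimated start) * 100.
# Dates are illustrative.
estimated_start = date(2024, 1, 1)
estimated_end = date(2024, 3, 1)
actual_end = date(2024, 3, 11)

variance_pct = (actual_end - estimated_end).days / (estimated_end - estimated_start).days * 100
print(f"Schedule Variance: {variance_pct:.1f}%")  # 16.7% overrun
```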
Process flow:
1. The Test Manager analyzes the schedule variation and compares it with the established project- and TSID-level goals.
2. The data is submitted to the Metrics Manager for the trend graph and TSID-level analysis.
3. Close.
TEST DESIGN PRODUCTIVITY

Metric: Test Design Productivity
KPI: # of test cases created per day per tester

Measured For | YES / NO | Frequency
Programs | YES | At the completion of each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | At the completion of the project
Enhancements | NO | N/A
Process flow:
1. During the test design phase, the tester logs the test cases designed per day; test design time is logged in QC.
2. At the completion of test execution, the Test Manager collates this data and computes Test Design Productivity.
3. The data is submitted to the Metrics Manager for the trend graph and TSID-level analysis.
4. Close.
REVIEW EFFECTIVENESS

Metric: Review Effectiveness
KPI: Internal vs. external review feedback of test cases

Measured For | YES / NO | Frequency
Programs | YES | At the completion of each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | At the completion of the project
Enhancements | NO | N/A
Process flow:
1. During the test design phase, testers conduct peer reviews of the test cases designed by their peers.
2. During the test design phase, the test cases are also reviewed by external stakeholders.
3. The Test Manager collates the internal and external review feedback and computes Review Effectiveness.
4. The data is submitted to the Metrics Manager for the trend graph and TSID-level analysis.
5. Close.
REQUIREMENT TRACEABILITY INDEX

Metric: Requirement Traceability Index
KPI: Requirements mapped to test scenarios/cases

Measured For | YES / NO | Frequency
Programs | YES | During the course of each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | During the course of the project
Enhancements | NO | N/A
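A minimal sketch of the traceability check, assuming the requirement-to-test-case mapping is available as a dictionary; the identifiers below are illustrative, not from the deck.

```python
# Requirement Traceability Index: requirements mapped to test scenarios/cases.
# In practice the mapping lives in the test management tool; this layout is illustrative.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
test_case_mapping = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-2", "REQ-3"},
}

covered = set().union(*test_case_mapping.values())
index = len(covered & requirements) / len(requirements) * 100
print(f"Requirement Traceability Index: {index:.0f}%")  # 75%
print(f"Untraced requirements: {sorted(requirements - covered)}")  # ['REQ-4']
```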
Process flow:
1. During the test design phase, the tester maps each test case to the requirements; this mapping is managed and updated throughout the test lifecycle.
2. During the course of the testing lifecycle, the Test Manager computes this metric to ensure that all testable requirements are tested.
3. Close.
TEST CASE EFFECTIVENESS

Metric: Test Case Effectiveness
KPI: [(Total Defects - Defects not mapped to test cases) / Total # of test cases] x 100

Measured For | YES / NO | Frequency
Programs | YES | At the completion of each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | At the completion of the project
Enhancements | NO | N/A
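A worked sketch of the Test Case Effectiveness formula as reconstructed above, with illustrative counts.

```python
# Test Case Effectiveness =
#   (total defects - defects not mapped to test cases) / total test cases * 100.
# Figures are illustrative.
total_defects = 50
defects_not_mapped_to_test_cases = 10  # e.g. found ad hoc, outside scripted cases
total_test_cases = 200

effectiveness = (total_defects - defects_not_mapped_to_test_cases) / total_test_cases * 100
print(f"Test Case Effectiveness: {effectiveness:.1f}%")  # 20.0%
```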
Process flow:
1. During the test execution phase, the tester logs each defect and maps it to the test case concerned.
2. At the completion of test execution, the Test Manager collates this data and computes Test Case Effectiveness.
3. The data is submitted to the Metrics Manager for the trend graph and TSID-level analysis.
4. Close.
DEFECT SEEPAGE TO SIT

Metric: Defect Seepage to SIT
KPI: # of defects seeped to SIT from earlier phases

Measured For | YES / NO | Frequency
Programs | YES | At the completion of UAT for each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | At the completion of the UAT phase of the project
Enhancements | YES | Monthly, consolidated for all enhancements that went live in the month
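A minimal sketch of the phase-contribution analysis described in the flow below, assuming each seeped defect is attributed to the phase that should have caught it; the phase names are illustrative.

```python
from collections import Counter

# Phase that should have caught each defect detected in SIT; values illustrative.
seeped_defect_origins = ["Unit Test", "Code Review", "Unit Test", "Requirements", "Unit Test"]

by_origin_phase = Counter(seeped_defect_origins)
worst_phase, count = by_origin_phase.most_common(1)[0]
print(f"Defects seeped to SIT: {sum(by_origin_phase.values())}")
print(f"Largest contributor: {worst_phase} ({count} defects)")  # Unit Test (3)
```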
Process flow:
1. The Test Manager analyzes which phase contributed the most defects to SIT.
2. The Test Manager submits the data to the Project Manager and may participate in causal analysis with stakeholders, if required.
3. The data is submitted to the Metrics Manager for the trend graph and TSID-level analysis.
4. Close.
DEFECT REJECTION RATIO

Metric: Defect Rejection Ratio
KPI: # of rejected defects in SIT / Total defects

Measured For | YES / NO | Frequency
Programs | YES | At the completion of each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | At the completion of the project
Enhancements | YES | Monthly, consolidated for all enhancements that went live in the month
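A worked sketch of the ratio, assuming defect statuses can be exported from Quality Centre; the status values are illustrative of a streamlined defect model.

```python
# Defect Rejection Ratio = rejected defects in SIT / total defects logged in SIT.
# Status values are illustrative.
statuses = ["Closed", "Rejected", "Fixed", "Rejected", "Closed", "Fixed", "Closed", "Fixed"]

rejection_ratio = statuses.count("Rejected") / len(statuses)
print(f"Defect Rejection Ratio: {rejection_ratio:.1%}")  # 25.0%
```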
Process flow:
1. During the test execution phase, the tester logs each defect and its status.
2. At the completion of test execution, the Test Manager collates this data and computes the Defect Rejection Ratio.
3. Is the Defect Rejection Ratio very high compared with the TSID- or project-level goals? If not, the data is submitted to the Metrics Manager for the trend graph and TSID-level analysis, and the metric is closed.
DEFECT AGEING BY SEVERITY

Metric: Defect Ageing by Severity
KPI: Time taken to fix the defect, across the various severity levels

Measured For | YES / NO | Frequency
Programs | YES | At the completion of each release in the program, each iteration of the program, or any other agreed milestone
Projects | YES | At the completion of the project
Enhancements | NO | N/A
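A minimal sketch of the ageing calculation, assuming each defect record carries a severity plus its open and close dates (the layout and values are illustrative).

```python
from collections import defaultdict
from datetime import date

# (severity, opened, closed) per defect, as QC open/close dates might be exported.
defects = [
    ("Critical", date(2024, 2, 1), date(2024, 2, 3)),
    ("Critical", date(2024, 2, 5), date(2024, 2, 6)),
    ("Medium",   date(2024, 2, 1), date(2024, 2, 11)),
]

ages = defaultdict(list)
for severity, opened, closed in defects:
    ages[severity].append((closed - opened).days)

for severity, day_counts in ages.items():
    print(f"{severity}: average {sum(day_counts) / len(day_counts):.1f} days to fix")
# Critical: 1.5 days, Medium: 10.0 days
```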
Process flow:
1. During the test execution phase, the tester logs each defect's open and close dates.
2. At the completion of test execution, the Test Manager collates this data and computes Defect Ageing by Severity.
3. The Test Manager submits the data to the Project Manager and may participate in causal analysis with stakeholders, if required.
4. Close.
TEST CASES PLANNED VS. EXECUTED

Metric: Test Cases Planned vs. Executed
KPI: No. of test cases planned vs. executed

Measured For | YES / NO | Frequency
Programs | YES | Ongoing, as part of test execution
Projects | YES | Ongoing, as part of test execution
Enhancements | NO | N/A
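A minimal sketch of the daily snapshot reported with the daily status; the counts are illustrative.

```python
# Daily snapshot: test cases executed against those planned to date; counts illustrative.
planned_to_date = 120
executed_to_date = 96

print(f"Planned vs. executed: {executed_to_date}/{planned_to_date} "
      f"({executed_to_date / planned_to_date:.0%})")  # 96/120 (80%)
```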
Process flow:
1. During the test execution phase, the tester updates the test cases executed against the planned test cases.
2. As an ongoing activity, the Test Manager computes test cases planned vs. executed.
3. The Test Manager reports this metric on a daily basis as part of the daily status.
4. Close.