
What is Software? Types of Software Testing Entrance Testing Concepts & Tools in Market What is Software Testing? Why Testing in Organizations? What is Quality? How to get Quality Software? Quality Standards ISO CMM SIX SIGMA Quality Assurance System (QAS) Software Development LifeCycle (SDLC) SDLC Models Fish Model Waterfall Model Prototyping Model RAD model Component Assembly Model Spiral Model V-Model Testing Methodologies White Box Testing Black Box Testing System Testing Usability Testing Functional Testing BVA & ECP Ad-Hoc Testing Performance Testing Security Testing Maintenance Testing UAT

What is Manual Testing? Error, Defect, Bug Defect Lifecycle or Bug Lifecycle Why does Software have Bugs? Testing Documents (R&R) Good Testing Engineer (or) Quality Assurance Engineers General STLC HP STLC IBM STLC Test Initiation Test Policy Test Strategy Testing Factors (or) Issues Test Methodology (TRM) Test Plan Test Design

Test Cases Examples on Test Cases Preparation Use Cases Traceability Matrix (RTM) Test Execution Build Version Control Test Harness Sanity Testing & Smoke Testing Comprehensive Testing Regression testing Final Regression Testing Test Reporting (or) Defect Reporting Resolution Type Types of Bugs Test Closing Sign Off Testing Metrics Certifications for Testing

What is Automation Testing? Manual Vs Automation Differences Automation Advantages/Disadvantages Why Automation? Automation Testing Tools Automated Testing Process

Introduction QTP Testing Process QTP Installation QTP Starting Process Add-In Manager Using Sample Applications QTP Window Test Pane Keyword View Expert View Working with Actions Active Screen Data Table Debug Viewer Pane Using Quick Test Commands Designing Tests & Components Planning Tests & Components Recording Tests & Components Choosing Recording Modes Changing the Active Screen Creating, Opening, Saving Tests or Components with Locked Resources Checkpoints Standard Checkpoint

Image Checkpoint Table Checkpoint Database Checkpoint Text Checkpoint Text Area Checkpoint Bitmap Checkpoint XML Checkpoint Adding Checkpoints to a Test or Component Modifying Checkpoints Parameterizing Values Data Driver Wizard Outputting Values Configuring Values Learning Virtual Objects Working with Data Tables Recovery Scenarios Configuring Quick Test Testing WEB Objects Testing VB Applications Testing Active X Controls Object Repository Object Spy Object Identification User Defined Functions Quality Center Connection QTP Interview Questions

Introduction QC Testing Process Starting Quality Center Quality Center Window Sample Web Sites Specifying Requirements Planning Tests Designing Test Steps Running Tests Running Tests Manually Running Tests Automatically How to Track Defects Adding New Defects Matching Defects Updating Defects Mailing Defects Creating Favorite Views Triggering a Traceability Alert Creating Follow up Alerts Generating Reports Generating Graphs QC Interview Questions

Software Quality:
To say software is a quality product, it has to satisfy four factors:
1. Meet customer requirements (technical factor)
2. Meet customer expectations (technical factor)
3. Time to market (non-technical factor)
4. Cost to purchase (non-technical factor)

Software Quality Assurance: (SQA)


Maintaining and measuring the strength of the development process. Eg: Life Cycle Testing.

Life Cycle Development: SDLC


Stages in S/W Development Life Cycle:
1. Information Gathering
2. Analysis
3. Design
4. Coding
5. Testing
6. Maintenance & Implementation

Research and Development
Once the market research is carried out, the customer's need is given to the Research & Development (R&D) division to conceptualize a cost-effective system that could potentially solve the customer's needs in a manner better than the one currently adopted by competitors. Once the conceptual system is developed and tested in a hypothetical environment, the development team takes control of it. The development team adopts one of the software development methodologies given below, develops the proposed system, and delivers it to the customer.

System/Information Engineering and Modeling
As software is always part of a larger system (or business), work begins by establishing the requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when the software must interface with other elements such as hardware, people and other resources. The system is the basic and most critical requirement for the existence of software in any entity. So if the system is not in place, it should be engineered and put in place. In some cases, to extract the maximum output, the system should be re-engineered and spruced up. Once the ideal system is engineered or tuned, the development team studies the software requirements for the system.

Software Requirement Analysis
This process is also known as the feasibility study. In this phase, the development team visits the customer and studies their system. They investigate the need for possible software automation in the given system. By the end of the feasibility study, the team furnishes a document that holds the specific recommendations for the candidate system. It also includes personnel assignments, costs, the project schedule, target dates, etc. The requirement gathering process is then intensified and focused specifically on software. To understand the nature of the program(s) to be built, the system engineer or "analyst" must understand the information domain for the software, as well as the required function, behavior, performance and interfacing. The essential purpose of this phase is to find the need and to define the problem that needs to be solved.

System Analysis and Design
In this phase of the software development process, the software's overall structure and its nuances are defined. In terms of client/server technology, the number of tiers needed for the package architecture, the database design, the data structure design, etc. are all defined in this phase. A software development model is thus created. Analysis and design are very crucial in the whole development cycle: any glitch in the design phase can be very expensive to fix at a later stage of the software development, so much care is taken during this phase. The logical system of the product is developed in this phase.

Code Generation
The design must be translated into a machine-readable form; the code generation step performs this task. If the design is done in a detailed manner, code generation can be accomplished without much complication. Programming tools like compilers, interpreters, debuggers, etc. are used to generate the code. Different high-level programming languages like C, C++, Pascal and Java are used for coding; the right programming language is chosen with respect to the type of application.

Testing
Once the code is generated, software program testing begins. Different testing methodologies are available to unravel the bugs that were committed during the previous phases. Different testing tools and methodologies are already available, and some companies build their own testing tools, tailor-made for their own development operations.

Maintenance
The software will definitely undergo change once it is delivered to the customer. There can be many reasons for this change: change could happen because of unexpected input values into the system, and changes in the system could directly affect the software operations. The software should be developed to accommodate changes that could occur during the post-implementation period.

Life Cycle Testing:


[Figure: Fish Model of Software Development. The upper line shows the development life cycle: Information Gathering (BRS) → Analysis (SRS: functional & system requirements) → Design (HLD, LLD) → Coding (programs) → System Testing → Maintenance (test s/w changes). The lower line shows the testing life cycle running in parallel: Review → Review & Prototype → White Box Testing → Black Box Testing.]

Fig: Fish Model of Software Development


In this Fish model, the Analysis, Design and Coding phases are called Verification; System Testing and Maintenance are called Validation.
BRS (Business Requirement Specification): This document defines the customer requirements to be developed as software. It is also known as the Customer Requirement Specification (CRS) or User Requirement Specification (URS).
SRS (Software Requirement Specification): This document defines the functional and system requirements to be used.
Review: A static testing technique to estimate the completeness and correctness of a document.
HLD (High Level Design): This document defines the overall hierarchy of the system, from the root module to the leaf modules. The HLD is also known as the External Design.
LLD (Low Level Design): This document defines the internal logic of every sub module in terms of structural logic (DFDs) and backend logic (E-R diagrams). The LLD is also known as the Internal Logic. Eg: DFDs, E-R diagrams, class diagrams and object diagrams.
Prototype: A sample model of an application without functionality (i.e. only screens) is called a prototype. Eg: PowerPoint slides.
White Box Testing: A coding-level testing technique to estimate the completeness and correctness of a program in terms of its internal logic.
Black Box Testing: A build-level testing technique (i.e. on the .exe form of the software program). During this test, test engineers validate the completeness and correctness of every functionality in terms of customer requirements.
Software Testing: The verification and validation of software is called software testing.
Verification: Are we building the system right?
Validation: Are we building the right system? (with respect to customer requirements)
Note: The above model is implemented by almost all companies to produce quality software. The refined, in-depth form of this model is called the V-Model.

V-Model: V stands for Verification and Validation. This model defines the mapping between software development stages and testing stages, and is derived from the Fish Model.

Development                          Testing
Information Gathering & Analysis     1. Assessment of Development Plan
                                     2. Prepare Test Plan
                                     3. Requirements Phase Testing
Design                               Design Phase Testing
Coding                               Program Phase Testing (Unit Testing)
Build                                1. Functional Testing
                                     2. User Acceptance Testing
                                     3. Test Documents Management
Maintenance                          1. Port Testing
                                     2. Test Software Changes
                                     3. Test Efficiency

Refinement Form of V-Model: The V-Model is an expensive process for small-scale and medium-scale organizations. For this reason, small-scale and medium-scale organizations maintain a separate testing team for System Level Testing only (i.e. Black Box Testing).

[Figure: Refinement form of the V-Model. Each document on the left maps to a testing stage on the right:
BRS/CRS/URS (Review) → User Acceptance Testing
S/wRS (Review) → System Testing (BB)
HLD → Integration Testing (WB)
LLDs → Unit Testing (WB)
Coding]
I. Reviews During Analysis: In general, the software development process starts with information gathering and analysis. In this phase, business-analyst-category people develop the BRS and S/wRS. To estimate the completeness and correctness of these documents, the responsible business analysts conduct reviews of both the BRS and the S/wRS against the factors below:
1. Are they the right requirements?
2. Are they complete?
3. Are they achievable (with respect to technology)?
4. Are they reasonable (with respect to time)?
5. Are they testable?

II. Reviews During Design: After completion of analysis and its reviews, design-category people concentrate on external design and internal design development. To estimate the completeness and correctness of these documents, they review the HLD and LLDs against the factors below:
1. Are they understandable?
2. Do they meet the right requirements?
3. Are they complete?
4. Are they followable?
5. Do they handle errors?

III. Unit Testing: After completion of design and its reviews, programmers concentrate on coding to physically construct the software. In this phase, programmers conduct unit-level testing on those programs through the White Box Testing technique, which is classified into three parts:
1. Execution Testing
   Basis Paths Coverage: every statement in the program participates correctly in execution. Eg: an if-else statement has to be checked twice, once for the if part and once for the else part.
   Loops Coverage: checks whether each loop terminates correctly, without going into an infinite loop.
   Program Technique Coverage: a programmer is said to be a good programmer if execution uses fewer memory cycles and CPU cycles.
2. Operations Testing: checks whether the program runs on the customer's expected platforms. Platforms means operating systems, compilers, browsers, etc.
3. Mutation Testing: a mutation is a complex change in program logic. Programmers follow this technique to estimate the completeness and correctness of a program's testing: after making such a change, all tests are re-run.
   If all tests passed during testing time and all tests still pass after the change, the test set is incomplete (it did not detect the change).
   If, after the change, some tests pass and some tests fail, the test set detected the change.

Fig: Showing the different tests after a complex change.
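The execution-testing coverages above can be sketched in code. This is a minimal illustration, not from the original text: `grade` and `countdown` are hypothetical functions used only to show one test per basis path and loop-termination checks.

```python
def grade(score):
    """The if and else branches form two basis paths."""
    if score >= 40:
        return "pass"
    else:
        return "fail"

def countdown(n):
    """Loop coverage: verify the loop terminates and runs exactly n times."""
    steps = 0
    while n > 0:
        n -= 1
        steps += 1
    return steps

# Basis paths coverage: one test per branch of the if-else statement.
assert grade(50) == "pass"   # exercises the 'if' path
assert grade(30) == "fail"   # exercises the 'else' path

# Loops coverage: zero, one, and many iterations all terminate correctly.
assert countdown(0) == 0
assert countdown(1) == 1
assert countdown(5) == 5
```

If either branch or any loop case were missing, a statement could go entirely unexecuted during testing, which is exactly what basis-path and loop coverage guard against.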

IV. Integration Testing: After the dependent modules are developed and tested, developers compose them to form a build and conduct integration testing to verify the completeness and correctness of the module composition. There are three approaches to integration testing:
a) Top-down Approach: testing is conducted on the main module without some of the sub modules being ready. Developers use temporary programs, called stubs, instead of the under-construction sub modules. A stub deactivates the flow to the under-construction module and returns the flow to the main module; the main module calls the stub.
b) Bottom-up Approach: testing is conducted on sub modules without coming from the main module. Developers use a temporary program, called a driver, instead of the under-construction main module. A driver activates the flow to the sub modules; the driver calls the sub modules.
c) Hybrid Approach: a combination of the Top-down and Bottom-up approaches, using both stubs and drivers. This approach is also known as the Sandwich Approach.
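The stub and driver roles described above can be sketched as follows. This is an illustrative sketch only; `main_module`, `sub1_stub`, `sub1_real` and `driver` are hypothetical names, not part of any real framework.

```python
# --- Top-down: the real main module calls a stub standing in for Sub-1 ---
def sub1_stub(data):
    # Stub: returns a canned value and hands control straight back to main.
    return "stub-result"

def main_module(sub1=sub1_stub):
    # Main module under test; the unfinished sub module is replaced by a stub.
    return "main:" + sub1("payload")

# --- Bottom-up: a driver exercises the real sub module in place of main ---
def sub1_real(data):
    return data.upper()

def driver():
    # Driver: temporary caller that activates the flow into the sub module.
    return sub1_real("payload")

assert main_module() == "main:stub-result"   # top-down with a stub
assert driver() == "PAYLOAD"                 # bottom-up with a driver
```

In the hybrid (sandwich) approach, both mechanisms are used at once: drivers exercise lower-level modules while stubs stand in for unfinished branches below the main module.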

Build: The final integrated set of all modules, in .exe form, is called a build or system.

V. Functional & System Testing: After all possible modules are integrated as a system, the separate testing team in the organization validates that build through a set of Black Box testing techniques. These techniques are classified into four divisions:
1. Usability Testing
2. Functional Testing
3. Performance Testing
4. Security Testing

Usability Testing: In general, System Level Testing starts with usability, in the early days of the job. At this level, the testing team follows two testing techniques:
a) User Interface Testing: covers three categories:
   Ease of use (i.e. screens understandable to the user)
   Look & Feel (attractiveness of the screens)
   Speed in interface (sharp navigations to complete a task)
b) Manual Support Testing: checks the context sensitiveness (with respect to a work or task) of the user manuals. This is done in the end days of the job. Eg: help documents and user manuals.
Flow: Receive build from developers → User Interface Testing → remaining Functional & System Tests → Manual Support Testing.

Functional Testing: The mandatory part of black box testing is functional testing. During these tests, the testing team concentrates on meeting customer requirements. Functional testing is classified into the sub-tests below.
Consider a window with the functionalities: Mail, Chat, Forgot Password, Change Password, Exit. Here there is one wrong placement, i.e. Forgot Password: if you forgot your password, how could you have logged in to reach this window?
a) Sanity Testing: testing the overall functionalities of the initial build released, to know whether the build released by the development team is stable enough for complete testing. For example, rejecting a build without a specific reason, like just saying "the watch is not working."
b) Smoke Testing: testing the functionalities at a higher level, end to end, when a stable build is released. At this level, the testing team rejects a build with a reason when that build is not fit for complete testing. For example, saying "the watch is not working due to the key rod", i.e. with a reason.
Flow: Receive build from developers → Sanity/Smoke Test → Functional & System Test → User Acceptance Test (UAT).
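A smoke test of the kind described above can be sketched as a quick end-to-end check on the core features before committing to complete testing. This is a hypothetical illustration: `login`, `compose_mail` and `smoke_test` are stand-ins for a build's features, not from the original text.

```python
def login(user, password):
    # Illustrative core feature 1 of the build under test.
    return user == "admin" and password == "secret"

def compose_mail(to, body):
    # Illustrative core feature 2 of the build under test.
    return {"to": to, "body": body, "sent": True}

def smoke_test():
    """Return True only if every core feature works end to end."""
    checks = [
        login("admin", "secret"),
        compose_mail("a@b.com", "hi")["sent"],
    ]
    return all(checks)

# Accept the build for complete testing only if the smoke test passes;
# otherwise reject it back to developers, with the failing check as the reason.
assert smoke_test() is True
```

The point of the reason requirement in smoke testing is visible here: when a check fails, the failing entry in `checks` names exactly which feature blocked the build.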

c) Input Domain Testing: This is part of functionality testing, but test engineers give special treatment to the input domains of objects, through Boundary Value Analysis (BVA, for size/range) and Equivalence Class Partitioning (ECP, for type).
The BVA pattern is as follows:
Min = pass; Min-1 = fail; Min+1 = pass
Max = pass; Max-1 = pass; Max+1 = fail
BVA defines the range or size of the object. For example, for age the range is 18-60; here the range is taken into consideration, not the size. ECP defines what types of characters the object accepts as valid; the remaining types are invalid.

Example 1: A login process allows a user-id and password to authorize users. From the design documents, the user-id allows alphanumerics, 4-16 characters long, and the password allows lower case, 4-8 characters long. Prepare the BVA and ECP for the user-id and password.

User-id:
BVA (Size): Min = 4 characters; Min-1 = 3 characters; Min+1 = 5 characters; Max = 16 characters; Max-1 = 15 characters; Max+1 = 17 characters
ECP (Type): Valid: a-z, A-Z, 0-9. Invalid: special characters, blank space.

Password:
BVA (Size): Min = 4 characters; Min-1 = 3 characters; Min+1 = 5 characters; Max = 8 characters; Max-1 = 7 characters; Max+1 = 9 characters
ECP (Type): Valid: a-z. Invalid: A-Z, 0-9, special characters, blank space.

These are the BVA and ECP values; from these we can know the size or range of the object.

Example 2: A textbox allows 12-digit numbers in which * is mandatory and - is optional. Give the BVA and ECP for this textbox.

Textbox:
BVA (Size): Min = Max = 12 digits = pass; Min-1 = Max-1 = 11 digits = fail; Min+1 = Max+1 = 13 digits = fail
ECP (Type): Valid: 0-9 with *; 0-9 with * and -. Invalid: a-z; A-Z; 0-9 without *; special characters except * and -; blank space.
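The BVA and ECP values from Example 1 can be turned directly into test data. This is a sketch: `validate_user_id` is a hypothetical implementation of the user-id rule (alphanumeric, 4-16 characters), written only so the boundary and partition cases have something to run against.

```python
import re

def validate_user_id(value):
    # Hypothetical user-id rule from Example 1: alphanumeric, 4-16 chars.
    return bool(re.fullmatch(r"[A-Za-z0-9]{4,16}", value))

# BVA (size): min, min-1, min+1, max, max-1, max+1
assert validate_user_id("a" * 4) is True      # min      -> pass
assert validate_user_id("a" * 3) is False     # min - 1  -> fail
assert validate_user_id("a" * 5) is True      # min + 1  -> pass
assert validate_user_id("a" * 16) is True     # max      -> pass
assert validate_user_id("a" * 15) is True     # max - 1  -> pass
assert validate_user_id("a" * 17) is False    # max + 1  -> fail

# ECP (type): valid classes a-z, A-Z, 0-9; invalid classes specials, blanks
assert validate_user_id("abcD12") is True     # valid partition
assert validate_user_id("ab@#12") is False    # special characters -> invalid
assert validate_user_id("ab  12") is False    # blank space -> invalid
```

One representative per equivalence class is enough for ECP, while BVA deliberately probes both sides of each size boundary.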

d) Recovery Testing: Also known as Reliability Testing. During this test, test engineers validate whether the application build changes from an abnormal state back to a normal state. Suppose an application is terminated or the power goes off in the middle of a process: the application should not hang the system; it should notify the end user, and the system should return from the abnormal state to the normal state through backup and recovery procedures.
Flow: Normal State → Abnormal State → Backup & Recovery Procedures → Normal State.

e) Compatibility Testing: Also known as Portability Testing. During this test, test engineers validate whether the application build runs on the customer's expected platforms. During this test, test engineers face two types of compatibility problems: forward compatibility and backward compatibility.
Forward compatibility example: a VB program not working on the UNIX platform. Our software is correct, but the operating system has some defect. This case rarely occurs, because operating systems will mostly not have such defects.
Backward compatibility example: Oracle-95 was developed for Windows-95 but is also expected to work on Windows-98; if it fails there, the defect is in our application.

f) Configuration Testing: Also known as Hardware Compatibility Testing. During this test, test engineers validate whether the application build runs on hardware devices of different technologies. Ex: printers of different technologies, different-technology LAN cards, different LAN topologies. All of these should work with our application build.
g) Installation Testing: The application plus its supported software is installed on a customer-site-like configuration, and the following are checked:
   Setup program execution
   Easy interface (checked during installation)
   Occupied disk space (checked after installation)
h) Parallel Testing: Also known as Comparative Testing. During this test, test engineers try to find the competitiveness of our application product through comparison with other competitive products. This test is done only for software products, not for application software.
i) Ad-hoc Testing: A tester conducts a test on the application build based on predetermined ideas, i.e. the tester tests the build based on past experience.
j) Retesting: The re-execution of a test with multiple test data on the same application build is called retesting. Example: a Multiply screen takes Input 1 and Input 2 and shows a Result; the expected result is Input 1 * Input 2. The test is re-executed with test data such as (Min, Min), (Min, Max), (Max, Min), (Max, Max) and (0, 0).
k) Regression Testing: The re-execution of tests on a modified build, to ensure the bug fix works and to check for possible side effects, is called regression testing. When developers release a modified build, the testing team re-executes the previously failed tests (to confirm the fixes) and the previously passed tests on the impacted areas (to detect side effects).

Fig: Testing on a modified application build.

VI. User Acceptance Testing (UAT): After completion of all possible functional and system tests, project management concentrates on User Acceptance Testing to collect feedback from customer-site people. There are two ways to conduct UAT:
Alpha Test: for software applications; by real customers; in the development site itself.
Beta Test: for software products; by customer-site-like people; in a customer-site-like environment.

VII. Testing during Maintenance: After completion of User Acceptance Testing and the resulting modifications, project management concentrates on forming a release team, consisting of a few developers, a few testers and a few hardware engineers. This release team conducts port testing at the customer site to estimate the completeness and correctness of the software installation there. Port testing covers:
   Compact installation
   Overall functionality
   Input device handling
   Output device handling
   Secondary storage devices
   Operating system error handling
   Co-existence with other software to share common resources
After completion of port testing, the release team conducts training sessions for the end users. During utilization of the software, customer-site people send change requests to the organization. A change request is either an enhancement or a missed defect, and each follows its own flow:
   Enhancement → Impact Analysis → Perform Change
   Missed Defect → Impact Analysis → Perform Change → Test Software Change → Improve testing process capability

Defect Removal Efficiency (DRE):


DRE = A / (A + B)
Here A is the number of bugs found by the testing team during testing, and B is the number of bugs found by customer-site people during a certain period of maintenance.
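The DRE formula above can be computed directly. The defect counts below are illustrative numbers, not from the original text.

```python
def dre(found_by_testing, found_by_customer):
    """Defect Removal Efficiency: DRE = A / (A + B).

    A = bugs found by the testing team during testing.
    B = bugs found by customer-site people during maintenance.
    """
    return found_by_testing / (found_by_testing + found_by_customer)

# Illustrative: the testing team found 90 bugs, the customer found 10 more.
assert dre(90, 10) == 0.9  # 90% of all known defects were removed before release
```

A DRE close to 1 means testing caught nearly all defects before release; a low DRE signals that the testing process capability needs improvement.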

Testing Terminology:
1. Monkey Testing or Chimpanzee Testing: The coverage of only the main activities of the application build during testing is called monkey testing. The testing team follows this type of testing due to lack of time. For example, among Mail Open, Mail Compose, Mail Reply and Mail Forward, only Mail Open and Mail Compose are tested, because Mail Reply and Mail Forward are similar to Mail Compose.
2. Exploratory Testing: The level-by-level coverage of all activities during testing is called exploratory testing. Test engineers follow this style of testing, module by module, due to lack of knowledge of the application.
3. Big Bang Testing: A single stage of testing after completion of the entire system's development is called big bang testing. It is also known as informal testing.
4. Incremental Testing: Multiple stages of testing, from program level to system level, are called incremental testing or formal testing. Eg: LCT (Life Cycle Testing).
5. Manual Vs Automation: A test engineer conducting a test on an application build without the help of any software tool is doing manual testing. A test engineer conducting a test on an application build with the help of a software tool is doing test automation. Eg: a carpenter fitting a screw without a screwdriver is like manual testing; fitting a screw with a screwdriver is like test automation.
Test automation is undertaken for two reasons:
   Impact of a test: the test is repeated often.
   Criticality of a test: the test is complex to apply manually.
Due to the impact and criticality of tests, test engineers concentrate on test automation.
Note: From the definitions of retesting and regression testing, test repetition is a mandatory task in a test engineer's job. For this reason, test engineers move to test automation.

The testing documents hierarchy and the roles responsible:
   Company level: Test Policy (prepared by Quality Control, QC) and Test Strategy (prepared by Quality Analysts, QA).
   Project level: Test Methodology and Test Plan (prepared by the Test Lead); Test Cases, Test Procedures and Test Scripts (prepared by Test Engineers); Test Log and Defect Report.

I) Test Policy: A company-level document developed by Quality Control people (QC, almost always management). The following abbreviations are used:
   LOC: Lines of Code
   FP: Functional Points (i.e. number of screens, inputs, outputs, queries, forms, reports)
   QAM: Quality Assessment Measurement
   TMM: Test Management Measurement
   PCM: Process Capability Measurement

A sample Test Policy document:

   [Company Name and Address]
   Testing Definition: Verification + Validation
   Testing Process: Proper planning before starting testing
   Testing Standard: 1 defect per 250 LOC / 1 defect per 10 FP
   Testing Measurements: QAM, TMM, PCM
   [Signature of C.E.O.]

The Test Policy defines the testing objective. To meet that objective, Quality Analyst people define the testing approach through a Test Strategy document.

II) Test Strategy: A company-level document developed by Quality Analyst (QA) people. The test strategy defines a common testing approach to be followed.
Components of a Test Strategy:
1) Scope and Objective: about the organization, the purpose of testing and the testing objective.
2) Business Issues: budget control for testing. Eg: of 100% project cost, 64% goes to software development and maintenance and 36% to testing and quality assurance.
3) Testing Approach: the mapping between testing issues and development stages (V-Model). This approach is expressed in matrix form, called the Test Responsibility Matrix (TRM) or Test Matrix (TM). It is based on the development stages (five stages) and the test factors (fifteen factors), as shown in the following figure.

[Figure: Test Responsibility Matrix. Rows list the fifteen test factors (e.g. Ease of Use, Authorization, ...); columns list the five development stages: Information Gathering & Analysis, Design, Coding, System Testing, Maintenance. Which cells apply depends on the project and on change requests.]

4) Roles and Responsibilities: the names of the jobs in the testing team and their responsibilities during testing.

5) Test Deliverables: the testing documents to be prepared during testing.
6) Communication and Status Reporting: the required negotiation between every two consecutive jobs in the testing team.
7) Defect Reporting & Tracking: the required negotiation between the testing team and the development team to track defects.
8) Testing Measurements & Metrics: QAM, TMM, PCM.
9) Risks & Mitigations: a list of expected failures and possible solutions to overcome them during testing.
10) Training Plan: the training sessions required for the testing team to understand the business requirements.
11) Change & Configuration Management: how to handle customer change requests during testing and maintenance.
12) Test Automation & Tools: the possibilities for going to automation.

Test Factors: To define quality software, Quality Analyst people use fifteen test factors:
1) Authorization: whether a user is valid or not valid to connect to the application.
2) Access Control: whether a valid user has permission to use specific services or not.
3) Audit Trail: whether the application maintains metadata about user operations or not.
4) Continuity of Processing: the integration of internal modules for control and data transmission (Integration Testing).
5) Correctness: meeting customer requirements in terms of functionality.
6) Coupling: co-existence with other existing software to share common resources (Inter-Systems Testing).
7) Ease of Use: user-friendliness of the screens.
8) Ease of Operate: installation, uninstallation, dumping (from one computer to another), downloading, uploading.
9) File Integrity: creation of backups during execution of the application (for recovery).
10) Reliability: recovery from abnormal states.
11) Portability: running on different platforms.
12) Performance: speed of processing.
13) Service Levels: order of functionalities.
14) Maintainable: whether the application build is serviceable for a long time at the customer site or not.
15) Methodology: whether test engineers follow standards or not during testing.

Test Factors Vs Black Box Testing Techniques:
1) Authorization: Security Testing; Functionality/Requirements Testing.
2) Access Control: Security Testing; Functionality/Requirements Testing.
3) Audit Trail: Functionality/Requirements Testing (error-handling coverage).
4) Continuity of Processing: Integration Testing (White Box Testing).
5) Correctness: Functionality/Requirements Testing.
6) Coupling: Inter-Systems Testing.
7) Ease of Use: User Interface Testing; Manual Support Testing.
8) Ease of Operate: Installation Testing.
9) File Integrity: Functionality/Requirements Testing; Recovery Testing.
10) Reliability: Recovery Testing (one-user level); Stress Testing (peak-load level).
11) Portability: Compatibility Testing; Configuration Testing (H/W).
12) Performance: Load Testing; Stress Testing; Data Volume Testing; Storage Testing.
13) Service Level: Functionality/Requirements Testing; Stress Testing (peak load).
14) Maintainable: Compliance Testing.
15) Methodology: Compliance Testing (whether the testing team follows testing standards or not during testing).

Who defines what:
   Quality: by Quality Control people (QC)
   Test Factors: by Quality Analysts (QA)
   Testing Techniques: by the Test Lead
   Test Cases: by Test Engineers

III) Test Methodology: A project-level document developed by the Quality Analyst or the corresponding Project Manager (PM). The test methodology is a refined form of the test strategy with respect to the corresponding project. To develop a test methodology from the corresponding test strategy, the QA/PM follows the approach below.
Step 1: Acquire the test strategy.
Step 2: Identify the project type (against the stages Analysis, Design, Coding, System Testing, Maintenance):
   1) Traditional Project
   2) Off-the-shelf Project (outsourcing)
   3) Maintenance Project (on-site project)

Note: Depending on the project type, the Quality Analyst (QA) or Project Manager (PM) decreases the number of columns in the TRM (Test Responsibility Matrix), i.e. the development stages.
Step 3: Determine the project requirements.
Note: Depending on the current project version's requirements, the QA or PM decreases the number of rows in the TRM, i.e. the test factors.
Step 4: Determine the scope of the project requirements.
Note: Depending on expected future enhancements, the QA or PM can add back some of the previously removed test factors into the TRM.
Step 5: Identify tactical risks.
Note: Depending on the analyzed risks, the QA or PM decreases some of the selected rows in the TRM.
Step 6: Finalize the TRM for the current project, based on the above analysis.
Step 7: Prepare the system test plan.
Step 8: Prepare module test plans if required.

Testing Process:
Test Initiation → Test Planning → Test Design → Test Execution → Test Reporting → Test Closure

IV) Test Planning:- After completion of test methodology creation and finalization of the required testing process, test lead category people concentrate on test planning to define: What to test? When to test? How to test? Who is to test?

Test Plan Format (IEEE):
1) Test Plan ID: Unique number or name
2) Introduction: About the project
3) Test Items: Modules or functions or services or features
4) Features to be tested: Modules taken up for test design
5) Features not to be tested: Which ones and why not
6) Approach: Selected testing techniques to be applied on the above modules (the TRM finalized by the Project Manager)
7) Testing Tasks: Necessary tasks to do before starting each feature's testing
8) Suspension Criteria: The technological problems that may arise during execution of the above features' testing
9) Feature Pass or Fail Criteria: When a feature is pass and when a feature is fail
10) Test Environment: Required hardware and software to conduct testing on the above modules
11) Test Deliverables: Required testing documents to be prepared by test engineers during the above modules' testing
12) Staff & Training Needs: The names of the selected test engineers and the training sessions they need to understand the business logic (i.e. customer requirements)
13) Responsibilities: Work allocation to the above selected testers, in terms of modules
14) Schedule: Dates and times
15) Risks & Mitigations: Non-technical problems that may arise during testing and the solutions to overcome them
16) Approvals: Signatures of the Project Manager or Quality Analyst and the Test Lead

Items 3, 4, 5 define What to test?
Items 6, 7, 8, 9, 10, 11 define How to test?
Items 12, 13 define Who is to test?
Item 14 defines When to test?

To develop such a test plan document, the test lead follows the work bench (approach) below.

Inputs:  Development plan, S/wRS, design documents, finalized TRM
Process: 1) Testing team formation  2) Identify tactical risks  3) Prepare test plan  4) Review test plan
Output:  Test plan

1) Testing Team Formation:- In general, the test planning process starts with testing team formation, which depends on the factors below:
   - Availability of test engineers
   - Possible test duration
   - Availability of test environment resources

Case Study - Test Duration:
   Client/Server, Web Applications, ERP (like SAP): 3-5 months of system testing
   System software (networking, compilers, hardware-related projects): 7-9 months of system testing
   Mission-critical software (like satellite projects): 12-15 months of system testing

Team Size:- Team size is based on the number of developers and is expressed as a ratio, i.e. Developers : Testers = 3 : 1.

2) Identify Tactical Risks:- After completion of testing team formation, the test lead concentrates on risk analysis (root-cause analysis). Examples:
   Risk 1: Test engineers' lack of knowledge of the domain (training sessions required for test engineers)
   Risk 2: Lack of budget (i.e. time)
   Risk 3: Lack of resources (bad testing environment, in terms of facilities)
   Risk 4: Lack of test data (improper documents; the mitigation is ad-hoc testing, i.e. based on past experience)
   Risk 5: Delays in delivery (in terms of job completion; the mitigation is working overtime)
   Risk 6: Lack of development process rigor (rigor means seriousness)
   Risk 7: Lack of communication

3) Prepare Test Plan:- After completion of testing team formation and risk analysis, the test lead prepares the test plan document in the IEEE format.

4) Review Test Plan:- After completion of test plan document preparation, the test lead reviews that document for completeness and correctness. In this review, the test lead applies coverage analysis:
   - Requirements-based coverage (What to test?)
   - Risks-based coverage (Who & when to test?)
   - TRM-based coverage (How to test?)

V) Test Design:- After completion of test planning and the required training sessions for the testing team, test design comes into play.
In this phase, test engineers prepare test cases for their responsible modules through three test case design methods:
a) Business logic based test case design
b) Input domain based test case design

c) User interface based test case design

a) Business logic based test case design:- In general, test engineers prepare test cases depending on the use cases in the S/wRS. Every use case in the S/wRS describes how to use a functionality. These use cases are also known as functional specifications (FS).

Document flow: Business Requirements -> Use Cases / Functional Specs -> HLD -> LLDs -> Coding -> .exe; the test cases are derived from the use cases / functional specs.

From the above model, every test case describes a test condition to be applied. To prepare test cases from use cases, test engineers follow the approach below.

Step 1: Collect all required use cases of the responsible module.
Step 2: Select a use case and its dependencies from that collected list (determinant use case -> use case -> dependent use case; e.g. Login -> Mail -> Logout).
  2.1: Identify the entry condition (base state), e.g. entering the User_Id is the first operation in login.
  2.2: Identify the inputs required (test data).
  2.3: Identify the exit condition (end state), i.e. the last operation in login.
  2.4: Identify outputs and outcome. Example: for a login form (User_Id, Password, OK), the output is a value, and the outcome is a process/state change (e.g. the Inbox window appearing).
  2.5: Study the normal flow (navigation or procedure).
  2.6: Study alternative flows and exceptions.
Step 3: Prepare test cases depending on the information collected from the use case.
Step 4: Review those test cases for completeness and correctness.
Step 5: Go to Step 2 until the study of all use cases is complete.

Use Case 1: A login process takes user_id and password to authorize users. User_Id allows alphanumerics in lower case, 4-16 characters long. Password allows alphabets in lower case, 4-8 characters long.

Sol:
Test Approach: Manual Testing
Test Condition: Login process
Test Techniques: BVA/ECP
Test Documents: Use cases, S/wRS, design docs, test procedure, and test log
Test Case ID: Login_TC_01

Test Case 1: Successful entry of User_Id

  BVA (Size)                   ECP (Type)
  Min   = 4 chars    Pass      Valid      Invalid
  Max   = 16 chars   Pass      a-z, 0-9   A-Z
  Min-1 = 3 chars    Fail                 Special chars
  Min+1 = 5 chars    Pass                 Blank space
  Max-1 = 15 chars   Pass
  Max+1 = 17 chars   Fail

Test Case 2: Successful entry of password

  BVA (Size)                   ECP (Type)
  Min   = 4 chars    Pass      Valid   Invalid
  Max   = 8 chars    Pass      a-z     0-9, A-Z
  Min-1 = 3 chars    Fail              Special chars
  Min+1 = 5 chars    Pass              Blank space
  Max-1 = 7 chars    Pass
  Max+1 = 9 chars    Fail

Test Case 3: Successful login operation

  User_Id   Password   Criteria
  Valid     Valid      Pass
  Valid     Invalid    Fail
  Invalid   Valid      Fail
  Value     Blank      Fail
  Blank     Value      Fail
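The BVA size checks above follow a mechanical pattern (min, max, min plus or minus 1, max plus or minus 1), so they can be generated. A minimal sketch in Python; the function name and tuple layout are my own, not from the original material:

```python
def bva_sizes(min_len, max_len):
    """Boundary Value Analysis for a size/length rule: each candidate
    length is paired with the expected verdict (True = should pass)."""
    return [
        ("Min",   min_len,     True),
        ("Max",   max_len,     True),
        ("Min-1", min_len - 1, False),
        ("Min+1", min_len + 1, True),
        ("Max-1", max_len - 1, True),
        ("Max+1", max_len + 1, False),
    ]

# User_Id field from Use Case 1: 4-16 characters
for label, length, should_pass in bva_sizes(4, 16):
    print(f"{label:5} = {length:2} chars  {'Pass' if should_pass else 'Fail'}")
```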

Use Case 2:- An insurance application allows users to select different types of policies. From the use case, when a user selects Type B insurance, the system asks for the age of the customer. The age value should be greater than 18 years and less than 60 years.

Test case 1: Successful selection of policy Type B insurance
Test case 2: Successful focus to age, when Type B insurance is selected
Test case 3: Successful entry of age

  BVA (Range)                 ECP (Type)
  Min   = 19 yrs    Pass      Valid   Invalid
  Max   = 59 yrs    Pass      0-9     a-z, A-Z
  Min-1 = 18 yrs    Fail              Special chars
  Min+1 = 20 yrs    Pass              Blank space
  Max-1 = 58 yrs    Pass
  Max+1 = 60 yrs    Fail

Use Case 3:- In a shopping application, users can create purchase orders for different types of items. From the purchase order use case, the user selects an item number and enters a quantity of up to 10. After the inputs are filled, the system returns the item price and the total amount with respect to the quantity.

Test case 1: Successful selection of item number
Test case 2: Successful entry of quantity

  BVA (Range)              ECP (Type)
  Min   = 1     Pass       Valid   Invalid
  Max   = 10    Pass       0-9     a-z, A-Z
  Min-1 = 0     Fail               Special chars
  Min+1 = 2     Pass               Blank space
  Max-1 = 9     Pass
  Max+1 = 11    Fail


Test case 3: Successful calculation of Total = Price * Quantity

Use Case 4:- Prepare test cases for a computer shutdown.
Test case 1: Successful selection of the shutdown operation using the Start menu
Test case 2: Successful selection of the shutdown option using Alt+F4
Test case 3: Successful shutdown operation
Test case 4: Unsuccessful shutdown operation due to a process still running
Test case 5: Successful shutdown operation through power off

Use Case 5:- A door opens when a person comes in front of it and closes once the person has come inside.
Test case 1: Successful opening of the door when a person is in front of it.

  Person    Door     Criteria
  Present   Opened   Pass
  Absent    Opened   Fail

Test case 2: Successful door closing due to the absence of the person.

  Door     Person    Criteria
  Opened   Present   Pass
  Closed   Present   Pass
  Closed   Absent    Fail

Test case 3: Successful door closing when a person comes inside.

  Door     Person    Criteria
  Closed   Inside    Pass
  Closed   Outside   Fail
  Opened   Outside   Pass
  Opened   Inside    Fail

Test case 4: Unsuccessful door closing due to a person standing in the middle of the doorway.

Use Case 6:- Prepare test cases for money withdrawal from an ATM, with all rules and regulations.
Test case 1: Successful insertion of card
Test case 2: Unsuccessful card insertion due to wrong angle
Test case 3: Unsuccessful card insertion due to invalid account (e.g. expired card or another bank's card)


Test case 4: Successful entry of PIN number
Test case 5: Unsuccessful operation due to wrong PIN entered 3 times
Test case 6: Successful selection of language
Test case 7: Successful selection of account type
Test case 8: Unsuccessful selection due to wrong account type selected with respect to the corresponding card
Test case 9: Successful selection of withdrawal operation
Test case 10: Successful entry of amount
Test case 11: Unsuccessful operation due to wrong denominations
Test case 12: Successful withdrawal operation (correct amount, right receipt, and the card comes back)
Test case 13: Unsuccessful withdrawal operation due to amount greater than the available balance
Test case 14: Unsuccessful withdrawal operation due to lack of cash in the ATM
Test case 15: Unsuccessful withdrawal operation due to server down
Test case 16: Unsuccessful operation due to amount greater than the day limit (including across multiple transactions)
Test case 17: Unsuccessful operation due to clicking Cancel after inserting the card
Test case 18: Unsuccessful operation due to clicking Cancel after inserting the card and entering the PIN
Test case 19: Unsuccessful operation due to clicking Cancel after inserting the card, entering the PIN, and selecting the language
Test case 20: Unsuccessful operation due to clicking Cancel after inserting the card, entering the PIN, selecting the language, and selecting the account type
Test case 21: Unsuccessful operation due to clicking Cancel after inserting the card, entering the PIN, selecting the language, selecting the account type, and selecting withdrawal
Test case 22: Unsuccessful operation due to clicking Cancel after inserting the card, entering the PIN, selecting the language, selecting the account type, selecting withdrawal, and entering the amount
Test case 23: Number of transactions per day
Use Case 7:- Prepare test cases for a washing machine operation.
Test case 1: Successful power supply
Test case 2: Successful door open
Test case 3: Successful water supply
Test case 4: Successful dropping of detergent
Test case 5: Successful clothes filling
Test case 6: Successful door closing
Test case 7: Unsuccessful door closing due to clothes overflow
Test case 8: Successful washing-settings selection
Test case 9: Successful washing operation
Test case 10: Unsuccessful washing operation due to lack of water
Test case 11: Unsuccessful washing operation due to clothes overload
Test case 12: Unsuccessful washing operation due to improper power supply
Test case 13: Unsuccessful washing due to wrong settings


Test case 14: Unsuccessful washing due to machine problems
Test case 15: Successful drying of clothes
Test case 16: Unsuccessful washing operation due to water leakage from the door
Test case 17: Unsuccessful washing operation due to the door being opened in the middle of the process

Use Case 8:- An E-Banking application lets users connect through an internet connection. To connect to the bank server, the application accepts values for the fields below.
  Password: 6-digit number
  Area code: 3-digit number / blank
  Prefix: 3-digit number that does not start with 0 or 1
  Suffix: 6-digit alphanumeric
  Commands: cheque deposit, money transfer, bills pay, mini statement

Test case 1: Successful entry of password

  BVA (Size)                      ECP (Type)
  Min = Max = 6       Pass        Valid   Invalid
  Min-1 = Max-1 = 5   Fail        0-9     a-z, A-Z, Special chars, Blank space
  Min+1 = Max+1 = 7   Fail

Test case 2: Successful entry of area code

  BVA (Size)                      ECP (Type)
  Min = Max = 3       Pass        Valid              Invalid
  Min-1 = Max-1 = 2   Fail        0-9, Blank space   a-z, A-Z, Special chars
  Min+1 = Max+1 = 4   Fail

Test case 3: Successful entry of prefix

  BVA (Range)                     ECP (Type)
  Min   = 200    Pass             Valid   Invalid
  Max   = 999    Pass             0-9     a-z, A-Z, Special chars, Blank space
  Min-1 = 199    Fail
  Min+1 = 201    Pass
  Max-1 = 998    Pass
  Max+1 = 1000   Fail

Test case 4: Successful entry of suffix

  BVA (Size)                      ECP (Type)
  Min = Max = 6       Pass        Valid           Invalid
  Min-1 = Max-1 = 5   Fail        0-9, a-z, A-Z   Special chars, Blank space
  Min+1 = Max+1 = 7   Fail

Test case 5: Successful selection of commands such as cheque deposit, money transfer, bills pay, and mini statement.
Test case 6: Successful connection to the bank server with all valid inputs.

  Fields            Criteria
  All valid         Pass
  Any one invalid   Fail

Test case 7: Successful connection to the bank server without filling the area code.

  Remaining Fields   Area Code   Criteria
  All valid          Blank       Pass
  Any one invalid    Blank       Fail

Test case 8: Unsuccessful connection to the bank server without filling all fields except area code.

  Remaining Fields   Area Code   Criteria
  All valid          Blank       Pass
  Any one blank      Blank       Fail
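The four field rules of Use Case 8 can be expressed as one validation sketch. The Python below is illustrative only: the field names and the regular-expression encodings are my own reading of the rules, not part of the original material:

```python
import re

# Hypothetical encodings of the Use Case 8 field rules:
RULES = {
    "password":  r"\d{6}",          # exactly 6 digits
    "area_code": r"(\d{3})?",       # 3 digits, or blank
    "prefix":    r"[2-9]\d{2}",     # 3 digits, not starting with 0 or 1
    "suffix":    r"[0-9a-zA-Z]{6}", # 6 alphanumeric characters
}

def validate(field, value):
    """Return True when the value satisfies the field's rule."""
    return re.fullmatch(RULES[field], value) is not None

print(validate("prefix", "200"))   # True  (boundary Min)
print(validate("prefix", "199"))   # False (Min-1)
print(validate("area_code", ""))   # True  (blank allowed)
```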

Test Case Format (IEEE):
1) Test Case ID: Unique number or name
2) Test Case Name: The name of the test condition
3) Feature to be Tested: Module or function name to be tested
4) Test Suite ID: The name of the test batch of which this case is a member
5) Priority: Importance of the test case in terms of functionality
   P0 - Basic functionality
   P1 - General functionality (input domain, error handling, compatibility, inter-systems, configuration, installation)
   P2 - Cosmetic functionality (e.g. user interface testing)
6) Test Environment: Required hardware and software to execute this case
7) Test Effort (person-hours): Time to execute this test case (e.g. the average time to execute a test case is 20 minutes)
8) Test Duration: Date and time to execute this test case after receiving the build from developers
9) Test Setup: Necessary tasks to do before starting this test case's execution
10) Test Procedure: A step-by-step process to execute this test case


(Test procedure template: a header carries the Company Name, Company Logo, Project Name <Project ID.Version NO.>, Use Case ID, Use Case Name, and Test Case ID, above a table with columns Step No, Description, Data Input (Action), Expected, Actual Result, Run1, Run2, Run3. The columns up to Expected are filled during test design; Actual Result and the Run columns are filled during test execution.)

11) Test Case Pass/Fail Criteria: When this case is pass and when this case is fail.

Note: In general, test engineers create the test case document with the step-by-step procedure only; they try to keep the remaining fields in mind for further test execution.

Case Study 1:- Prepare a test case document for a successful file save in Notepad.
1) Test Case ID: Tc_save_1
2) Test Case Name: Successful file save
3) Test Procedure:

  Step No   Description                       I/P Required       Expected
  1         Open Notepad                      ---                Empty editor opened and Save option disabled
  2         Fill with text                    Valid text         Save option enabled
  3         Click Save                        ---                Save window appears with default file name
  4         Enter file name and click Save    Unique file name   Saved file name appears in the title bar of Notepad

Case Study 2:- Prepare a test case document for a successful mail reply.
1) Test Case ID: Tc_Mail_Reply_1
2) Test Case Name: Successful Mail Reply
3) Test Procedure:

  Step No   Description                        I/P Required          Expected
  1         Login to site                      Valid User_Id & Pwd   Mail box page appears
  2         Click Inbox link                   ---                   Inbox page appears
  3         Select received mail subject       ---                   Mail message window appears (mail opened)
  4         Click Reply                        ---                   Compose window appears with To: received Mail_Id; Sub: Re:[received mail sub]; CC: off; BCC: off; Message: received mail message with comments
  5         Enter new message and click Send   Valid text            Acknowledgement from server

b) Input domain based test case design:- Sometimes use cases cannot drive all the test cases test engineers need, e.g. input domain test cases, because use cases describe functionality and are not responsible for the size and type of inputs. For this reason, test engineers study the data models in the low-level design documents (e.g. E-R diagrams) to collect complete information about the size and type of every input object. During the study of the data model, test engineers follow the approach below.

Step 1: Collect the data models of the responsible modules.
Step 2: Study every input attribute in terms of size, type, and constraints.
Step 3: Identify critical attributes, which participate in internal manipulations.
Step 4: Identify non-critical attributes, which are just input/output type.
  Example: in an account record, A/c No and Balance are critical attributes; A/c Name and Address are non-critical attributes.

Step 5: Prepare a data matrix for every input attribute:

  I/P Attribute   ECP (Type)        BVA (Size/Range)
                  Valid | Invalid   Min | Max

Note: If a test case covers an operation, test engineers prepare a step-by-step procedure for that test case. If a test case covers an object, test engineers prepare a data-matrix-style table. For example, login is an operation, while entering User_Id and Password are objects.

Case Study: A bank automation application allows a fixed deposit operation for bank employees. The fixed deposit form takes the fields below as inputs.
  Depositor name: alphabets in lower case with initcap
  Amount: 1500 to 100000
  Tenure (time to deposit): up to 12 months
  Interest: numeric with decimal point
From the fixed deposit operation use case, if the tenure is greater than 10 months, then the interest is also greater than 10%. Prepare a test case document for the above scenario.

Test case 1:
1) Test Case ID: Tc_Fd_1
2) Test Case Name: Successful entry of depositor name
3) Data Matrix:

  I/P Attribute    ECP Valid          ECP Invalid                                 BVA Min   BVA Max
  Depositor Name   a-z with initcap   A-Z, init a-z, 0-9, Special chars, Blank    1 char    256 chars

Test case 2:
1) Test Case ID: Tc_Fd_2
2) Test Case Name: Successful entry of amount
3) Data Matrix:

  I/P Attribute   ECP Valid   ECP Invalid                            BVA Min   BVA Max
  Amount          0-9         a-z, A-Z, Special chars, Blank space   1500      100000

Test case 3:
1) Test Case ID: Tc_Fd_3
2) Test Case Name: Successful entry of tenure
3) Data Matrix:

  I/P Attribute   ECP Valid   ECP Invalid                            BVA Min   BVA Max
  Tenure          0-9         a-z, A-Z, Special chars, Blank space   1 month   12 months

Test case 4:
1) Test Case ID: Tc_Fd_4
2) Test Case Name: Successful entry of interest
3) Data Matrix:

  I/P Attribute   ECP Valid                ECP Invalid                            BVA Min   BVA Max
  Interest        0-9 with decimal point   a-z, A-Z, Special chars, Blank space   0.1       100

Test case 5:
1) Test Case ID: Tc_Fd_5
2) Test Case Name: Successful fixed deposit operation

3) Test Procedure:

  Step No   Description                I/P Required            Expected
  1         Login to bank server       Valid User_Id & Pwd     Menu appears
  2         Click FD option            ---                     FD form appears
  3         Fill fields and click OK   All valid fields        Acknowledgement from server
                                       Any one field invalid   Error from server

Test case 6:
1) Test Case ID: Tc_Fd_6
2) Test Case Name: Successful fixed deposit operation when tenure is greater than 10 months and interest is also greater than 10%
3) Test Procedure:

  Step No   Description                I/P Required                     Expected
  1         Login to bank server       Valid User_Id & Pwd              Menu appears
  2         Click FD option            ---                              FD form appears
  3         Fill fields and click OK   Valid name, deposit with         Acknowledgement from server
                                       tenure > 10 and interest > 10
                                       Tenure > 10 and interest <= 10   Error from server

Test case 7:
1) Test Case ID: Tc_Fd_7
2) Test Case Name: Unsuccessful fixed deposit operation without filling all fields
3) Test Procedure:

  Step No   Description                I/P Required          Expected
  1         Login to bank server       Valid User_Id & Pwd   Menu appears
  2         Click FD option            ---                   FD form appears
  3         Fill fields and click OK   Some fields blank     Error message from application
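The tenure/interest rule exercised by test cases 6 and 7 can be captured as a small test oracle. A hedged Python sketch; the function name and boolean return convention are my own assumptions:

```python
def fd_accepted(tenure_months, interest_pct):
    """Oracle for the fixed-deposit rule from the case study:
    tenure is at most 12 months, and if tenure exceeds 10 months
    the interest must also exceed 10%."""
    if not (1 <= tenure_months <= 12):
        return False
    if tenure_months > 10 and interest_pct <= 10:
        return False
    return True

print(fd_accepted(11, 12.5))  # True: tenure > 10 with interest > 10
print(fd_accepted(11, 9.0))   # False: tenure > 10 but interest <= 10
print(fd_accepted(8, 9.0))    # True: the rule only applies above 10 months
```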

c) User Interface based test case design:- To prepare test cases for usability testing, test engineers depend on the user interface conventions (rules) of their own organization, global user interface rules (e.g. the Microsoft 6 rules), and the interests of the customer-site people.

Example test cases:
Test case 1: Spelling check
Test case 2: Graphics check (alignment, font, style, color, and the other Microsoft 6 rules)
Test case 3: Meaningful error messages
Test case 4: Accuracy of data displayed (whether the display reflects reality). Examples:
  1) "Amount: xxxx" should display as "Amount: $ xxxx"
  2) "Date of Birth: --/--/--" should display as "Date of Birth: --/--/-- (dd/mm/yy)"
Test case 5: Accuracy of data in the database as a result of user input. Example: a value entered as 10.768 in the form is stored as 10.77 in the table and shown as 10.77 in the report.
Test case 6: Accuracy of data in the database as a result of external factors (e.g. imported files)
Test case 7: Meaningful help messages

Note: Test cases 1 to 6 indicate user interface testing; test case 7 indicates manual support testing.

Test Case Selection Review:- After completion of test case design, test engineers submit the test cases for their responsible modules to the test lead for completeness and correctness checking. In this review, the test lead follows coverage analysis:
  - Business requirement based coverage
  - Use case based coverage
  - Data model based coverage
  - User interface based coverage
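Test case 5 above hinges on a rounding rule (10.768 entered, 10.77 stored). A minimal Python sketch of such a check, assuming the database keeps two decimal places (the function name and the two-decimal assumption are mine):

```python
def stored_value(entered, decimals=2):
    """Simulate the form -> table step from the example:
    the database keeps the entered amount rounded to 2 decimals."""
    return round(entered, decimals)

entered = 10.768
# The table and the report should agree on the rounded value.
print(stored_value(entered))  # 10.77
```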


  - TRM based coverage

At the end of this review, the test lead prepares the Requirement Traceability Matrix (RTM), also known as the Requirements Validation Matrix (RVM). This matrix defines the mapping between test cases and customer requirements.

  Business Requirements   Sources (Use Cases, Data Models)   Test Cases
  xxxxx                   xxxxx                              xxxxx
  xxxxx                   xxxxx                              xxxxx
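A traceability matrix like the one above is essentially a requirement-to-test-cases mapping. A minimal Python sketch; the requirement and test case IDs are invented for illustration only:

```python
# Hypothetical RTM: each business requirement maps to the
# test cases that cover it.
rtm = {
    "BR-01 Login":         ["Login_TC_01", "Login_TC_02", "Login_TC_03"],
    "BR-02 Mail Reply":    ["Tc_Mail_Reply_1"],
    "BR-03 Fixed Deposit": ["Tc_Fd_1", "Tc_Fd_5", "Tc_Fd_6", "Tc_Fd_7"],
}

# Coverage check: every requirement must have at least one test case.
uncovered = [req for req, cases in rtm.items() if not cases]
print("Uncovered requirements:", uncovered)  # -> []
```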

VI) Test Execution:- After completion of test design and its reviews, the testing team concentrates on build releases from the development team.

1) Levels of test execution:
   Initial build from development -> Level-0 (Sanity/TAT/BVT)
   Stable build -> Level-1 (Comprehensive Testing)
   Modified builds after bug fixing/resolving -> Level-2 (Regression Testing)
   Before release -> Level-3 (Final Regression / Postmortem)

2) Test execution levels vs. test cases:
   Level-0: all P0 test cases
   Level-1: all P0, P1 & P2 test cases as batches
   Level-2: selected P0, P1 & P2 test cases with respect to the modifications
   Level-3: selected P0, P1 & P2 test cases with respect to high bug density modules
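The Level-0 and Level-1 rows above are simple priority filters over the test case pool (Level-2 and Level-3 are modification-driven, so they cannot be reduced to a fixed filter). A Python sketch; the case names and tags are invented for illustration:

```python
# Hypothetical test-case pool tagged with priorities (P0/P1/P2).
cases = [
    ("Login_TC_01", "P0"), ("Login_TC_02", "P1"),
    ("Tc_Fd_5", "P0"), ("Tc_Mail_Reply_1", "P2"),
]

LEVEL_PRIORITIES = {
    "level-0": {"P0"},              # sanity: basic functionality only
    "level-1": {"P0", "P1", "P2"},  # comprehensive: everything, as batches
}

def select(level):
    """Pick the test cases whose priority the level calls for."""
    wanted = LEVEL_PRIORITIES[level]
    return [name for name, prio in cases if prio in wanted]

print(select("level-0"))  # ['Login_TC_01', 'Tc_Fd_5']
```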


3) Test Harness (Test Framework):- It means ready to start testing. Test Harness = Test Environment + Test Bed. (The test environment is the hardware and software required for testing, whereas the test bed means the testing documents.)

4) Build Version Control:- During test execution, test engineers generally receive builds from developers through the model: Build -> Server (softbase, saved in one folder) -> FTP -> Test Environment.

From the above model, test engineers download the application from the softbase on the server system through networking protocols (e.g. FTP - File Transfer Protocol). To distinguish builds, the development people use a unique version numbering system for the builds. This numbering system lets the testing team distinguish the old build from the modified build. For build version control, development people also use version control tools (e.g. Microsoft Visual SourceSafe to maintain old code and modified code).

5) Level-0 (Sanity Testing):- In general, test execution starts with sanity testing to decide whether the build released by the development team is stable enough for complete testing to be applied. During sanity testing, the testing team verifies the testability factors below:
  - Understandable
  - Operable
  - Observable
  - Controllable
  - Simple
  - Maintainable
  - Consistent
  - Automatable

When an application build satisfies the above factors, test engineers conduct complete testing on that build; otherwise they reject the build back to the developers. This type of Level-0 testing is also known as Tester Acceptance Testing / Build Verification Testing / Testability Testing / Octangle Testing.

* Selective Automation (All P0 Test cases & carefully selected P1 Test cases) From the above model, test engineers are not automating some of P1 test cases and all P2 test cases, because they are not repeatable and easy to apply. 7) Level-1 (Comprehensive Testing):- After receiving stable build and completion of possible automation, test engineers concentrate on real test execution as batches. Every test batch consists of a set of dependent tests (i.e. end state of every test is base state to other test). Test batches are also known as Test Suites or Test Sets. During this test batches execution, test engineers prepare Test Log document. This document consists of three types of entries. Passed, all expected values are not equal to actual. Failed, any one expected variates with actual Blocked, post-pone due to parent functionality is wrong Skip In Queue Blocked In Progress Passed Failed Partial Pass/Fail Closed

Complete Automation

Level-1 (Comprehensive Test Cycle) 8) Level-2 (Regression Testing):- During Level-1 test execution, test engineers are reporting mismatches to developers. After receiving modified build from them, testing team concentrate on Regression Testing to ensure completeness and correctness of that modification. Developers Resolved Bug Severity


  High:   all P0 test cases, all P1 test cases, carefully selected P2 test cases
  Medium: all P0 test cases, carefully selected P1 test cases, some P2 test cases
  Low:    some P0 test cases, some P1 test cases, some P2 test cases

Case 1: If the severity of the bug resolved by the development team is high, test engineers re-execute all P0, all P1, and carefully selected P2 test cases on the modified build.
Case 2: If the resolved bug's severity is medium, test engineers re-execute all P0, carefully selected P1, and some P2 test cases.
Case 3: If the resolved bug's severity is low, test engineers re-execute some P0, P1, and P2 test cases.
* Case 4: If the testing team receives a modified build due to sudden changes in customer requirements, test engineers re-execute all P0, all P1, and carefully selected P2 test cases with respect to those changes.

VII) Test Reporting:- During test execution, test engineers report mismatches to developers through an IEEE defect report.

IEEE Format:
1) Defect ID: Unique number or name
2) Description: Summary of the defect
3) Build Version ID: The current build version in which the defect was raised
4) Feature: The module of the build in which the defect was found
5) Test Case Name: The corresponding failed test condition that returned the defect
6) Reproducible: Yes/No
   Yes - the defect appears every time the test is repeated
   No - the defect does not appear every time (i.e. it appears rarely) during test execution
7) If yes, attach the test procedure
8) If no, attach a snapshot and strong reasons
9) Severity: Seriousness of the defect with respect to functionality
   High - not able to continue the remaining testing before the defect is solved
   Medium - mandatory to solve, but able to continue the remaining testing before it is solved
   Low - may or may not be solved
10) Priority: The importance of solving the defect with respect to the customer (high, medium, low)
11) Status: New / Reopen
   New - the test engineer is reporting the defect to the developers for the first time
   Reopen - the defect is being re-reported a second time
12) Reported by: Name of the test engineer
13) Reported on: Date of submission
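The severity-to-scope rule in Cases 1-3 above is a lookup; a Python sketch (the dictionary encoding and the "all/selected/some" labels are my own shorthand):

```python
# Hypothetical regression-scope rule from Cases 1-3:
# what share of each priority bucket is re-executed per bug severity.
REGRESSION_SCOPE = {
    "high":   {"P0": "all",  "P1": "all",      "P2": "selected"},
    "medium": {"P0": "all",  "P1": "selected", "P2": "some"},
    "low":    {"P0": "some", "P1": "some",     "P2": "some"},
}

def regression_scope(severity):
    """Return the re-execution scope for a resolved bug's severity."""
    return REGRESSION_SCOPE[severity.lower()]

print(regression_scope("High"))
# {'P0': 'all', 'P1': 'all', 'P2': 'selected'}
```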


14) Assigned to: The name of the responsible person on the development side who receives the defect
15) Suggested fix: Possible reasons to accept and resolve the defect
_______________________________________________________________________
(Filled in by developers:)
16) Fixed bug: Accepted or rejected, by the Project Manager or Project Lead
17) Resolved bug: Developer
18) Resolved on: Date of solving
19) Resolution type: Type of solution
20) Approved by: The signature of the Project Manager

Defect Age: The time gap between "resolved on" and "reported on".

Defect Submission (the process): The defect submission process differs between large-scale and small/medium-scale organizations.
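Defect age as defined above is a simple date difference. A Python sketch; the two dates are illustrative only:

```python
from datetime import date

def defect_age(reported_on, resolved_on):
    """Defect age = time gap between 'resolved on' and 'reported on'."""
    return (resolved_on - reported_on).days

# Illustrative dates only:
print(defect_age(date(2024, 3, 1), date(2024, 3, 15)))  # 14 days
```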

Defect Statuses and Submission Flow:

  Large-scale organization: Test Engineer -> Test Lead -> Test Manager -> Quality Analyst, with transmittal reports going to the Project Manager, Team Lead, and Developers. (If a defect's severity is high but it is rejected by developers, the Quality Analyst gets involved.)

  Small & medium-scale organization: Test Engineer -> Test Lead -> Project Manager (PM) -> Team Lead -> Developers, with transmittal reports.

Along the way a new defect becomes Open/Accepted, Closed/Rejected, or Deferred; a fixed defect is Closed/Solved, or Reopened after performing regression testing (if needed).

Bug Life Cycle:
  Detect defect (occurred due to an error in coding)
  -> Reproduce defect (check whether the defect appears more than once)
  -> Report defect
  -> Bug fixing (developers decide whether it is acceptable)
  -> Bug resolving
  -> Bug closing

Resolution Type: (sent by developers to testers in response to a defect report)
There are twelve resolution types used by developers to give their verdict to test engineers:
1) Duplicate: rejected because this defect is the same as a previously reported defect.
2) Enhancement: rejected because this defect relates to future requirements of the customer.
3) Software limitation: rejected because this defect arises from limitations of software technologies.
4) Hardware limitation: rejected because this defect arises from limitations of hardware.
5) Not applicable: rejected due to the improper meaning of the defect.
6) Functions as designed: rejected because the coding is correct with respect to the design documents.
7) Need more information: not accepted and not rejected, but developers require extra information to fix it.
8) Not reproducible: not accepted and not rejected, but developers require the correct procedure to reproduce the defect.
9) No plan to fix it: not accepted and not rejected, but developers require extra time to fix it.
10) Fixed: accepted and ready to resolve.
11) Fixed indirectly: accepted but postponed to a future version (i.e. deferred).
12) User misunderstanding: requires extra negotiation between developers and test engineers.

* Types of defects (bugs):
1) User interface bugs (low severity)
   Ex1: Spelling mistake (high priority based on customer requirements)
   Ex2: Improper right alignment (low priority)
2) Input domain bugs (medium severity)
   Ex1: Does not allow a valid type (high priority)
   Ex2: Allows an invalid type as well (low priority)
3) Error handling bugs (medium severity)
   Ex1: Does not return an error message (high priority)
   Ex2: Incomplete meaning of the error message (low priority)
4) Calculation bugs (high severity)
   Ex1: Dependent outputs are wrong (high priority)
   Ex2: Final output is wrong (low priority)
5) Race condition bugs (high severity)
   Ex1: Deadlock (high priority, a show stopper)
   Ex2: Does not run on expected platforms (low priority)
6) Hardware bugs (high severity)
   Ex1: Device is not responding (high priority)
   Ex2: Wrong output device (low priority)
7) Load condition bugs (high severity)
   Ex1: Does not allow multiple users (high priority)
   Ex2: Does not allow the customer-expected load (low priority)
8) ID control bugs (medium severity)
   Ex: Wrong logo, logo missing, wrong version number, version number missing, copyright window missing, tester name missing, etc.
9) Version control bugs (medium severity)


Ex: Mismatches between two consecutive build versions. 10) Source bugs (Medium Severity) Ex: Mistakes in help documents. VIII) Test Closer:- After completion of all possible test cases execution and bug solving, test lead conduct test closer review meeting to estimate completeness and correctness of test execution process. In this review, test lead follows below factors: 1) Coverage Analysis Business requirement based coverage (BRS) Use cases based coverage (S/wRS) Data Model based coverage (Design Documents based) User Interface based coverage TRM based coverage (PM) 2) Bug Density Example: %of bugs found 20% 20% 40% 20% ------------100% ------------In the above example, more number of bugs found in Module-C, So there is need for Regression testing. Modules Name A B C D

3) Analysis of deferred bugs: whether the corresponding deferred bugs are genuinely postponable or not.
At the end of this test closure review meeting, the test lead concentrates on Level-3 testing (i.e. Post-Mortem Testing / Final Regression Testing / Pre-Acceptance / Release Testing). The flow is:

Test Reporting -> Gather regression requirements (depends on bug density) -> Effort estimation (person/hour) -> Plan regression -> Regression Testing

IX) User Acceptance Testing: After completion of final regression, project management concentrates on User Acceptance Testing to collect feedback from customer-site people. There are two approaches to conduct this testing:
1) Alpha Testing
2) Beta Testing
These were explained in a previous topic.

X) Sign Off: After completion of User Acceptance Testing and the resulting modifications, the test lead prepares the Final Test Summary Report (FTSR). This report is part of the Software Release Note (S/wRN). The Final Test Summary Report consists of several documents; the final bug summary report has the below format:

Bug Description | Found By | Features | Severity | Status (Closed/Deferred) | Comments

Case Study 1: (schedule for five months of testing process)

Deliverable                          Responsibility                      Completion Time
1) Test case selection               Test Engineers                      20-30 days
2) Test case selection review        Test Lead & Test Engineers          4-5 days
3) Requirements                      Test Lead                           1-2 days
4) Level-0 test automation           Test Engineers                      10-20 days
5) Level-1 & Level-2                 Test Engineers                      40-60 days
6) Communication & status reporting  Test Lead & Test Engineers          Weekly twice
7) Defect reporting & tracking       Test Lead & Test Engineers          Ongoing
8) Test closure & final regression   Test Lead & Test Engineers          4-5 days
9) User Acceptance Testing           Customer-site people with the
                                     involvement of the testing team     4-5 days
10) Sign Off                         Test Lead                           1-2 days

Case Study 2:
1) What type of testing are you doing?
2) What type of testing process is going on at your company?
3) What types of documents will you prepare for testing?
4) What is your involvement in those documents?
5) How will you select the reasonable tests to be applied on a project?
6) When will you go for automation?
7) What methods will you follow to prepare test cases?
8) What are the key components in your company's test plan document?
9) What is your company's test case format?
10) What is the meaning of regression testing and when will you do it?
11) What is the difference between error, defect and bug?
12) How will you report defects in your company?
13) What are the characteristics used to define a defect?
14) Explain the bug life cycle.
15) How will you know whether your reported defect was accepted or rejected?
16) What do you do when your reported defect is rejected by the developers?
17) What is the difference between defect age and build interval period?
18) What do you do to conduct testing on an unknown project?
19) What do you do to conduct testing on an unknown project without documents?
20) What are the differences between the V-Model and the Life Cycle / Waterfall / SDLC model?


Performance Testing
It is an expensive testing division within black box testing. During this test, the testing team concentrates on the speed of processing in the application build. Performance testing is classified into the below techniques:
a) Load Testing: The execution of the application build under the customer expected configuration and customer expected load to estimate performance is called Load Testing or Scalability Testing. Scale means the number of concurrent users.
b) Stress Testing: The execution of the application build under the customer expected configuration and loads outside the expected interval to estimate performance is called Stress Testing. For example, take a web browsing application where the customer expected configuration is 1000 users at a time; test engineers test at the expected load and also at other intervals, i.e. 2000 users, 200 users, 1 user and no users, and the build should give the same performance.
c) Storage Testing: The execution of the application build under a huge amount of resources to estimate storage limits is called Storage Testing. Example: with VB as the front-end and MS-Access as the back-end database, MS-Access supports only 2 GB of storage, which is a limitation.
d) Data Volume Testing: The execution of the application build under huge amounts of resources to estimate the volume (size) of data in terms of records is called Data Volume Testing. In Storage Testing we express the storage space in terms of GB, but here we express it in terms of records.
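The difference between load and stress testing is only in the load profile applied to the same build. A minimal Python sketch of the idea, with a dummy operation standing in for a real user transaction (the operation, timings and user counts are hypothetical; a real tool such as LoadRunner runs the users concurrently):

```python
import time

def operation():
    """Stand-in for one user transaction (hypothetical)."""
    time.sleep(0.001)

def run_at_load(users):
    """Run `users` transactions and return the average response time
    in seconds. Sequential here for simplicity; real load tools
    drive concurrent virtual users."""
    start = time.perf_counter()
    for _ in range(users):
        operation()
    return (time.perf_counter() - start) / users

expected_load = 50                     # customer expected load
load_result = run_at_load(expected_load)        # load test
stress_results = {u: run_at_load(u)             # stress test: loads
                  for u in (1, 10, 100)}        # outside the interval
print(f"avg response at expected load: {load_result:.4f}s")
```

A load test checks only `expected_load`; a stress test sweeps loads above and below it and expects comparable per-user response times.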

[Figure: performance plotted against resources; the threshold point marks where adding resources no longer improves performance.]


LOADRUNNER
The overall testing process is: Test Initiation -> Test Planning -> Test Design -> Test Execution -> Test Reporting -> Test Closure, with Performance Testing falling under test execution. From this testing process, performance testing is mandatory for multi-user applications, e.g. websites and networking applications. But manual load testing is expensive for finding the performance of an application under load. For this reason, test engineers plan to automate load testing, e.g. with LoadRunner, SilkPerformer, SQA LoadTest and JMeter.

LoadRunner 6.0:
- Developed by Mercury Interactive.
- A load testing tool to estimate performance.
- Supports Client/Server, Web, ERP and legacy technologies (C, C++, COBOL) for load testing.
- Creates a virtual environment to decrease testing cost.

Virtual Environment:-


RCL (Remote Command Launcher): converts a local request into a remote request.
VUGEN (Virtual User Generator): creates multiple virtual requests depending on one real remote request.
Portmapper: submits all virtual user requests to a single server process port.
CS (Controller Scenario): returns performance results during the execution of the server process as it responds to multiple virtual user requests.

[Figure: LoadRunner virtual environment. On the client, the Remote Command Launcher (RCL) and the Virtual User Generator (VUGEN) generate virtual user requests; the requests travel through the network interface card and the application/transport/network layers to the server, where the Portmapper feeds a single server process port and the Controller Scenario (CS) collects performance results. Memory requirement: customer expected configuration + 1.5 KB RAM per virtual user.]
Time Parameters: To measure the performance of software applications, we can use the below time parameters:
1) Elapsed Time: the total time for request transmission, processing in the server, and response transmission. This time is also known as Turn Around Time (or Swing Time).
2) Response Time: the time to get the first response from the server process (request transmission plus acknowledgement).
3) Hits per Second: the number of web requests received by the web server in one second.
4) Throughput: the speed of the web server in responding to those web requests (KB/sec).
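Hits per second and throughput can both be computed from a web server access log. A sketch assuming a simplified log of (timestamp in seconds, bytes sent) pairs; the log entries are hypothetical:

```python
from collections import defaultdict

def hits_and_throughput(log):
    """log: iterable of (timestamp_seconds, response_bytes).
    Returns ({second: hits}, {second: KB sent}) per one-second bucket:
    hits/sec and throughput (KB/sec) as defined above."""
    hits = defaultdict(int)
    kb = defaultdict(float)
    for ts, nbytes in log:
        bucket = int(ts)          # which one-second window this hit fell in
        hits[bucket] += 1
        kb[bucket] += nbytes / 1024.0
    return dict(hits), dict(kb)

# Hypothetical log: three requests in the first two seconds.
log = [(0.2, 2048), (0.7, 4096), (1.1, 1024)]
hits, throughput = hits_and_throughput(log)
print(hits)        # {0: 2, 1: 1}     -> hits per second
print(throughput)  # {0: 6.0, 1: 1.0} -> KB per second
```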

[Figure: leaky bucket model - hits per second fill a buffer/queue; the throughput draining the bucket depends on the structure of the application and its configuration.]

Note: We can use the last two time parameters to estimate the performance of web applications.

I) Client/Server Load Testing: LoadRunner allows you to conduct load testing on multi-user two-tier applications, e.g. a project in VB-Oracle or Java-Oracle. In this load testing, test engineers use the below components:
- Customer expected configured server computer
- Client/Server master build (on which usability and functionality testing is completed)
- Remote Command Launcher
- Virtual User Generator
- Portmapper
- Controller Scenario
- Database server


Test cases for Client/Server load testing: the client connects to the database server through a DSN. The load test cases cover the Data Manipulation Language (DML) operations (Insert, Update, Delete, Select) and the Transaction Control Language (TCL) operations (Commit, Rollback).

Navigation:
- Assume the customer expected configuration.
- Start menu -> Programs -> LoadRunner -> Remote Command Launcher.
- Start -> Programs -> corresponding database server (SQL Server, Oracle, Sybase, Quadbase).
- Start -> Programs -> LoadRunner -> VUGEN -> File menu -> click New -> select virtual user type as Client/Server -> select the database technology name (by default ODBC) -> click OK -> browse the master build path (C:\PF\MI\Samples\bin\flights.exe) -> specify the working directory (C:\WINDOWS\TEMP) -> specify the record-into sections (vuser_init, Actions, vuser_end) -> click OK -> record the business operations for one user -> click Stop Recording -> Tools menu -> Create Controller Scenario -> save the VUser script -> enter the number of VUsers -> click OK.

Transaction Point: LoadRunner cannot return performance results when no transaction is found in the VUser script. For this reason, test engineers insert transaction points to enclose the required operations in the Actions part.
Navigation: select a position on top of the required operation -> Insert menu -> Start Transaction -> enter the transaction name -> click OK -> select a position at the end of the operation -> Insert menu -> End Transaction -> click OK.
Rendezvous Point: an interrupt point in VUser script execution. This point stops the current VUser's execution until the remaining VUsers have also executed up to that point.
Navigation: select a position on top of the transaction -> Insert menu -> Rendezvous -> enter the name -> click OK.
Analyze result: LoadRunner returns performance results through a percentile graph.
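Transaction points simply delimit the operations whose response time is measured. The same idea in a plain Python sketch (this is not LoadRunner syntax; the transaction name and the sleeping "business operation" are illustrative):

```python
import time
from contextlib import contextmanager

timings = {}   # transaction name -> measured duration in seconds

@contextmanager
def transaction(name):
    """Measure wall-clock time between a 'start transaction' marker
    (entering the block) and an 'end transaction' marker (leaving it),
    analogous to LoadRunner's transaction points."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

with transaction("open_order"):    # start transaction
    time.sleep(0.01)               # the business operation under test
                                   # end transaction on block exit
print(f"open_order took {timings['open_order']:.3f}s")
```

Everything outside a `transaction` block is ignored for timing, which is exactly why LoadRunner reports nothing when a script contains no transactions.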

[Figure: percentile graph - response time plotted against the percentage of work completed, from 0% to 100%.]

Formula: Average Response Time = (time at 100% work completion under load) - (transaction starting time).

Increase Load (Stress Testing): LoadRunner allows you to increase the load for existing transactions. Controller Scenario -> Group menu -> Add VUsers -> select the quantity to add -> click OK.

Performance Results Submission: During load and stress testing, the testing team submits performance results to project management in the below format:

Scenario (Select, Update, Delete,
Insert, Commit, Rollback)           Load    Average Response Time (ms)
Select                              10      6
Select                              15      7.9
Select                              25      13.3
...                                 up to peak load

Benchmark Testing: After receiving performance results from the testing team, the project manager decides whether the corresponding values indicate good or bad performance. In this benchmarking, the project manager compares the current performance values against the below factors:
- Performance results of the old version
- Interest of customer-site people
- Performance results of competitive products in the market
- Interest of product managers
If the current performance is not good, the development team concentrates on changes in the structure of the application or improves the configuration of the environment.

Mixed Operations: LoadRunner allows you to conduct testing on different operations under different loads, e.g. a Select operation with load 10 and an Update operation with load 10.
Navigation: Controller Scenario -> Group menu -> Add Group -> enter the group name -> specify the VUser quantity -> browse the script name -> click OK.
Note 1: In multiple-group execution, test engineers maintain the same name for the rendezvous point.
Note 2: One group's VUsers wait at the rendezvous point until the remaining groups' VUsers reach the same point.
Note 3: LoadRunner maintains 30 seconds as the maximum time gap between two consecutive groups.
*Note 4: In general, test engineers maintain 25 VUsers per group to get reliable performance results.
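Benchmarking compares the current response times against a baseline such as the previous version's results. A hedged Python sketch; the current numbers come from the example table above, while the baseline values and the 10% tolerance are hypothetical:

```python
def benchmark(current, baseline, tolerance=0.10):
    """Compare average response times (ms) per load level.
    Returns the load levels where the current build is more than
    `tolerance` (here 10%) slower than the baseline -- candidates
    for structural or configuration tuning."""
    regressions = []
    for load, cur_ms in current.items():
        base_ms = baseline.get(load)
        if base_ms is not None and cur_ms > base_ms * (1 + tolerance):
            regressions.append(load)
    return sorted(regressions)

current  = {10: 6.0, 15: 7.9, 25: 13.3}   # from the table above
baseline = {10: 5.5, 15: 7.8, 25: 10.0}   # hypothetical old-version results
print(benchmark(current, baseline))       # [25]: 13.3 exceeds 10.0 * 1.1
```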


II) Web Load Testing: LoadRunner also allows you to conduct testing on three-tier applications. In web load testing, test engineers use the below components to create the test environment:
- Customer expected configured server computer
- Web master build (project)
- Remote Command Launcher
- Portmapper
- Virtual User Generator
- Controller Scenario
- Browser (IE/Netscape)
- Web server
- Database server

Test cases for web load testing:
[Figure: the browser connects to the web server over TCP/IP, and the web server connects to the database server through a DSN. The web load test cases are: URL open, text link, image link, form submission and data submission.]

1) URL Open: emulates opening a web site home page under load.
   Function: web_url("Step name", "URL=<path of home page>", "TargetFrame=", LAST);
2) Text Link: emulates opening a middle page through a text link under load.
   Function: web_link("Link text", "URL=<path of next page>", "TargetFrame=", LAST);
3) Image Link: emulates opening a middle page through an image link under load.
   Function: web_image("Image file name", "URL=<path of next page>", "TargetFrame=", LAST);
4) Form Submission: emulates submitting form data to the web server under load.
   Function: web_submit_form("Form name", <attributes>, <hidden fields>, ITEMDATA, <field values>, ENDITEM, LAST);


Example:
   web_submit_form("Login", "Method=GET", "Action=http://localhost/dir/login.asp", ITEMDATA, "User_ID=xxxx", "Pwd=xxx", "Sysdate=xxx", ENDITEM, LAST);
5) Data Submission: emulates submitting formless (no form on the desktop) or contextless data to the web server under load.
   Function: web_submit_data("Step name", <attributes>, ITEMDATA, <field values>, ENDITEM, LAST);
Note 1: In web load testing, test engineers select "E-Business Web (HTTP/HTML)" as the VUser type.
Note 2: In web load testing, LoadRunner treats one action as one transaction by default.
To record the above VUser script statements, follow the below navigation: select a position in the action -> Insert menu -> New Step -> select the required operation (URL, text link, image, form submission, submit data) -> click OK -> fill in the arguments -> click OK.

Analyze Results: During web load testing, LoadRunner returns two extra time parameters to analyze results. Graphs menu in the results window -> Web Server Resource Graphs -> Hits per Second & Throughput.
a) Hits per Second:
[Figure: hits per second plotted against elapsed time.]

b) Throughput:
[Figure: throughput (KB/sec) plotted against elapsed time.]

Performance Results Submission: During web load testing, test engineers report performance results to the project manager in the below format:

Scenario (URL, Link, Image,
Submit Form, Submit Data)   Load    Transaction Time (sec)  Throughput (KB/sec)
URL                         10      2                       118
URL                         15      2                       140
URL                         25      2                       187
URL                         30      3                       187
...                         up to peak load

Benchmarking: From World Wide Web Consortium standards, link operations should take 3 seconds and data-related operations 12 seconds under normal load.

QTP 9.2 (Quick Test Professional)

Introduction
- Developed by Mercury Interactive.
- Functionality testing tool.
- Supports .NET, SAP, Oracle Apps, multimedia and similar technologies.
- QTP records our manual test procedure into VBScript.
- Advanced keyword-driven testing.
- Fast and reliable in the testing process.

QTP Installation Process

- Insert the QTP CD.
- Double click My Computer -> double click the disk drive -> double click the QTP 9.2 zip file -> extract the QTP 9.2 setup to a selected drive.
- Double click the "crack for QTP" folder -> double click the "Mercury QTP 9.2 keygen" folder -> extract to the same drive.
- Double click My Computer -> select the drive where QTP 9.2 was unzipped -> double click the QTP 9.2 folder -> double click the "QTP CD FCS" folder -> double click the "Mercury QTP V9.2 LND" folder -> double click the "Legend" folder -> double click Install.txt -> select and copy the maintenance number and close Notepad.
- Double click the "QTP CD FCS" folder -> double click setup.exe -> the Quick Test Professional menu is displayed -> select Quick Test Professional Setup -> QTP 9.2 installation starts.
- The QTP 9.2 license agreement dialogue box is displayed -> select "I accept the terms in the license agreement" -> press Yes.
- The QTP 9.2 registration information is displayed -> enter the user name, company name and maintenance number. (Note: right click in the maintenance number field and paste the copied maintenance number.) -> click Next.
- The QTP setup box is displayed -> click Yes (registration confirmation).
- The "Set Internet Explorer advanced options" dialogue box is displayed -> select "Set the required options automatically" -> click Next.
- The "Choose destination location" dialogue box is displayed (required space to install QTP: 235652 KB) -> the review settings are displayed -> the setup status is displayed.
- Customer registration is displayed -> unselect "Register now" -> click Next.
- Select Updates is displayed -> select "Do nothing with the found updates" -> click Next -> click Finish.
- Unselect "View readme file" -> click Finish.
- QTP 9.2 is successfully installed.


QTP Starting Process

- Select the Start menu on the desktop -> Programs -> QuickTest Professional -> click QuickTest Professional.
- The QTP Add-In Manager dialogue box is displayed -> click OK.
- The QuickTest Professional welcome page is displayed with the following options:
1) Tutorial: displays the entire QTP testing process as a help document.
2) Start Recording: an advanced option in QTP to record the manual test procedure into VBScript directly from the welcome page.
3) Open Existing: used to open an existing test.
4) Blank Test: select this option when QTP starts.
5) Tip of the Day: displays short day-to-day tips to help during the QTP testing process.
6) Show this screen on startup: select this option to display the welcome page on startup, or unselect it otherwise.
Note: If we unselect the welcome screen and later want to display it again, follow the below process: select the Tools menu in the QTP window -> click Options -> select "Display welcome screen on startup" in the General tab -> click Apply -> click OK.
Click Blank Test.

QTP Window
The QTP window consists of different types of toolbars:
1) Title bar 2) Menu bar 3) File toolbar 4) Testing toolbar 5) Debug toolbar 6) Action toolbar 7) Test pane 8) Data table 9) Active screen 10) Status bar 11) Debug viewer
Title bar: displays the name of the testing tool (QTP) and the name of the test created (by default "Untitled Test") with minimize, maximize and close buttons in blue colour.
Menu bar: displays the commands for the testing process in QTP with dropdown menus. This menu bar consists of different types of menus.
File toolbar: consists of command buttons to start and stop the testing process using QTP.

[File toolbar icons: New, Open, Save, Print, Active Screen, Debug Viewer, Test Settings, Quality Center Connection, Object Spy, Data Table, Results, Objects, Object Repository.]

Testing toolbar: consists of command buttons to create the test during the testing process in QTP.
[Testing toolbar icons: Record, Start Run, Stop, New Action, Split Action, Insert Checkpoint, Start Transaction, End Transaction.]

Debug toolbar: used to run the created test step by step and debug it without errors.
[Debug toolbar icons: Pause, Step Into, Step Out, Step Over, Insert/Remove Breakpoint, Clear All Breakpoints.]
Action toolbar: displays the individual actions performed on the application during the creation of one test.
[Action toolbar controls: action list, display selected action, back.]
Test pane: displays the Keyword view and the Expert view.
Data table: an Excel-like sheet provided by QTP to parameterize values into variables. This data table consists of two tabs: 1. Global 2. Action.
Active screen: displays a snapshot of the application for each action performed on it.


Status bar: displays the current status of the test being created.
Debug viewer: displays the errors raised while debugging a test.
Note 1: When QTP starts, the active screen and debug viewer are not displayed in the QTP window; to display these two options follow the below navigation: select the View menu in the menu bar -> click Active Screen and Debug Viewer.
Note 2: To display the file toolbar, testing toolbar and debug toolbar, follow the below navigation: select the View menu -> Toolbars -> select File, Testing, Debug.

QTP Add In Manager

This option is provided by QTP to load the supporting technologies. QTP provides three built-in technologies by default: ActiveX controls, Visual Basic and Web. The Add-In Manager is displayed when QTP starts. The Add-In Manager dialogue box consists of:
I. Add-In: displays the name of the loaded technology with a check box. We can select or unselect this check box to conduct testing on applications based on the selected technology.
II. License: displays the type of license used for the loaded technologies. The licenses are of different types:
a. Built-In: provided by Mercury Interactive for the technologies it supplies. This built-in license is a permanent license.
b. Limited license (seat license): supports the loaded technology up to some extent, as a trial version, showing how many days remain before the license expires (minimum 14 days etc.). For technologies loaded on a seat license, the Time Remaining column in the Add-In Manager shows the remaining time.
c. Permanent: given for loaded technologies without any time limit; the loaded technologies remain permanent.
d. Not Licensed: if a technology is loaded without a permanent license, the License column displays "Not Licensed".


e. Outdated license: if a loaded technology does not work even though the license was updated, the License column displays "Outdated".
III. Add-In description: displays a summary of the loaded technology.
IV. Modify Add-In license: if a technology is not licensed, we can update it using Modify Add-In License.
V. Show on startup: provided by QTP as a check box; select or unselect it to control whether the Add-In Manager is displayed during QTP startup. If you are not interested in displaying the Add-In Manager, unselect this option.
Note 1: To display the Add-In Manager during QTP startup, follow the below navigation: Tools menu -> Options -> select the "Display Add-In Manager on startup" check box in the General tab -> click Apply -> click OK.
Note 2: After the QTP window has started, to know which technologies are loaded into the Add-In Manager: Help menu -> About QuickTest Professional; the dialogue box displays the loaded add-ins.
Note 3: To display the licenses created for the loaded technologies, click the License button; the QTP license summary dialogue box is displayed.
Note 4: We can modify the license here using Modify License.

Recording Modes
During the QTP testing process, test engineers record the manual test procedure into VBScript. There are three recording modes for recording the business operations:

1) General Recording 2) Analog Recording 3) Low level Recording

General Recording: a context-sensitive recording mode (comparable to WinRunner's context-sensitive mode). This recording mode is used to record the mouse and keyboard operations on the application.

Analog Recording: records mouse movements with respect to desktop coordinates; used to record images, electronic signals, digital signatures, etc. During analog recording, test engineers should maintain a constant screen resolution for the application.
Low Level Recording: records mouse movements with respect to time; used to record satellite applications, medical applications, etc.
General Recording navigation: click the Record button in the testing toolbar (or select the Test menu and click Record, or press F3). The Record and Run Settings dialogue box is displayed.
Note: The Record and Run Settings dialogue box consists of two tabs: Web and Windows Applications.
Web: this tab in the Record and Run Settings dialogue box records the business operations of web applications. It has two options:
- Record and run test on any open window.
- Open the following browser when a record (or run) session begins.
Note 1: Of the above two options, the first is selected to record business operations on any web application.
Note 2: The second option is selected to record the business operations of a particular web application. To record a particular web application, fill in the following browser details:
1) The type of browser (e.g. Microsoft Internet Explorer).
2) The address (URL) of the application.
Note 3: After specifying the above two details, select the below options: "Do not record and run on browsers that are already open" and "Close the browser when the test closes". Click Apply -> click OK.

Windows Applications: this tab in the Record and Run Settings dialogue box records the business operations of Windows (client/server) applications. It has two options:
- Record and run test on any open Windows-based application (default).
- Record and run on these applications (opened when a session starts).
Note 1: Of the above two options, the first is selected to record business operations on any Windows application.
Note 2: The second option is selected to record the business operations of particular Windows applications. To record a particular Windows application, fill in the application details: click Add -> the application details dialogue box is displayed -> browse the application path -> browse the working folder -> specify the program arguments (optional) -> click OK -> click Apply -> click OK.
Note 3: Use the Edit and Delete buttons to modify or delete the selected application.
Note 4: More than one Windows application can be added to the application details, but for web applications we can give only one set of details.
General recording starts, with a red-colour indication in the status bar.
Note 5: Once general recording has been started and stopped, the Record and Run Settings dialogue box is not displayed again until a new test is started.
Analog Recording navigation: start general recording -> select the Test menu -> click Analog Recording (or Ctrl+Shift+F4).


The Analog Recording Settings dialogue box is displayed.
Note: Select whether to record mouse operations relative to a specified window or relative to the screen. The dialogue box consists of the below options:
- Record relative to the screen (default).
- Record relative to the following window.
Note 1: Of the above two options, select "Record relative to the screen" to record business operations performed anywhere on the screen.
Note 2: Select "Record relative to the following window" to record relative to a particular window. When this option is selected, specify the window title using the hand icon: the hand icon is displayed; double click the window title in your application.
Note 3: The window title is captured from the application. Click Start Analog Record; analog recording starts.
Note 4: To know whether analog recording has started, QTP displays "Analog" in the status bar.
Low Level Recording navigation: start general recording -> select the Test menu -> click Low Level Recording. Low level recording starts, displaying "Low Level" in the status bar.

Sample Application Starting Process
Mercury Interactive provides two sample applications for testing with QTP:
1) A Windows-based application (Flight Reservation).
2) A web application (Mercury Tours site).
Navigation for the Windows-based application: Start menu on the desktop -> Programs -> QuickTest Professional -> Sample Applications -> click Flight -> the Login window is displayed -> enter an agent name (any name) -> enter the password (mercury) -> click OK -> the Flight Reservation Windows-based application is displayed.
Note 1: After the application is displayed on the desktop, drag it to the right side of the desktop using the mouse.
Note 2: Drag the QTP window to the left side of the desktop.
Navigation for the web application: Start menu -> Programs -> QuickTest Professional -> Sample Applications -> click Mercury Tours Website.

Test Pane The Test Pane consists of 2 tabs.

1) Keyword view 2) Expert view


Keyword view
The keyword view is used to create tests and to view them in a modular format. It displays each step created in the expert view as a row.
Working with the Keyword view: The keyword view is used to create, view and modify tests. It displays the test as a tree, in plain English, which makes the VBScript test created in the expert view easy to understand. The keyword view consists of different columns in a table (modular) format. The columns displayed by default are:
1) Item
2) Operation
3) Value
4) Documentation
Note: To add more columns to the keyword view, follow the below process.
Adding keyword view options: We can add, view, modify and delete the columns displayed in the keyword view using the below navigation: select the Tools menu -> click Keyword View Options -> the Keyword View Options dialogue box is displayed.
Note: The Keyword View Options entry appears in the Tools menu only when the keyword view is active; while creating tests in the expert view, this option is not displayed.
Keyword View Options dialogue box: This dialogue box consists of two tabs:
1) Available columns
2) Visible columns
Available columns: lists the columns that can be added to the keyword view. To add one column, follow the below navigation: select the column in the Available Columns tab -> click the > button to add the new column to the keyword view.
Note: When you follow the above procedure, a new column is added to the keyword view.
Visible columns: displays the existing keyword view columns along with the columns newly added from the Available Columns tab.
Note: To remove one column from the keyword view, follow the below navigation: select the column name in the Visible Columns tab -> click the < button.
Note 1: To add all the columns displayed in the Available Columns tab, click the >> button.
Note 2: To remove all the columns displayed in the Visible Columns tab, click the << button.
Note 3: To display the keyword view columns in the order you like, use the up and down arrows in the Keyword View Options dialogue box. Click OK.
The keyword view consists of the following columns:
1) Item: the name of the object on which the business operation is performed. The Item column is displayed with Action as the parent step for all items in a tree view; we can expand and collapse the tree with the + and - icons to see on which items operations were performed.
2) Operation: the type of operation performed on the item, e.g. Click, Select, Press.
3) Value: the arguments of the operation (e.g. true/false, expected values in terms of 0 and 1), indicating whether the operation performed on the application passed or failed.
4) Documentation: a plain-English description of the operation performed on the application.
5) Comments: displays the name of the window (in red colour) where the business operations were performed.
6) Assignment: displays the variables used in the VBScript.

Expert view
The expert view in the test pane is used to create the test in VBScript. The tests are created by recording the business operations performed on the application into steps. While recording the business operations, objects are recorded as the following test objects:

Standard object        Web test object
Edit box / Text box    WebEdit
List box               WebList
Push button            WebButton
Check box              WebCheckBox
Radio button           WebRadioGroup

Standard object        Windows test object
Edit box / Text box    WinEdit
List box               WinList
Push button            WinButton
Check box              WinCheckBox
Radio button           WinRadioButton

Checkpoints: While recording the business operations, the steps are created in the expert view in VBScript. To test the current behaviour of an application against its earlier (recorded) behaviour, checkpoints are used. There are 8 types of checkpoints in QTP:
1) Standard checkpoint
2) Bitmap checkpoint
3) Text checkpoint
4) Text area checkpoint
5) Accessibility checkpoint
6) Database checkpoint
7) XML checkpoint (file)
8) XML checkpoint (web)

Standard Checkpoint
This checkpoint is used to check the properties of objects. In general, the standard checkpoint is the starting point for every other checkpoint. Ex: Flight Reservation (Windows application)
Requirements:
Focus to window --------- Update Order button disabled
Open order --------- Update Order button disabled
Perform change --------- Update Order button enabled
Objects & Properties:
Edit box / Text box --------- height, focused, enabled, disabled, regular expression, width, value, x, y
List box --------- enabled, disabled, focused, count, value
Push button --------- enabled, disabled, focused, width, height, x, y
Check box --------- enabled, disabled, focused
Radio button --------- enabled, disabled, focused

Navigation for Flight Reservation application: Click Record in the QTP window. General recording started. Select the File menu in the application. Select and click Open Order. Select the Order No. checkbox. Enter an order number (1 to 10). Click OK. Perform a change in the opened order. Click the Update Order button. Stop recording. Keep the application in blank mode (click New in the application).

Navigation for Checkpoint Insertion: Follow the requirements and insert checkpoints. Note: In QTP, general recording must be started before inserting a checkpoint; otherwise the checkpoints are not displayed in the Insert menu. Start recording. Select the Insert menu. Select Checkpoint. Select and click Standard Checkpoint (F12). A hand icon is displayed. Click on the testable object in the application using this hand icon. The Object Selection – Checkpoint Properties box is displayed.

Note: This box displays the location of the object you selected, associated with several objects and windows. Click OK. The Checkpoint Properties dialogue box is displayed. Checkpoint Properties dialogue box: This dialogue box displays the object's properties. We can select one property to test, as given in the customer requirements. This dialogue box consists of the below options:
1) Name of the object
2) Class of the object
3) Type, Property, Value table
4) Configure value
5) Checkpoint timeout

Type, Property, Value:

In the Checkpoint Properties dialogue box in QTP, the (Type, Property, Value) table is displayed. In this table we can select the properties based on customer requirements. Note: If the customer requirement is enabled or disabled, select the checkbox given for the enabled property in this table. Value: This value option is provided based on the requirement for a particular property. We can change this value under Configure value. Configure value: The Configure value consists of 2 options: 1) Constant 2) Parameter. Note1: In the above table, we can change the property values using the Constant option in Configure value. Note2: As per the customer requirements, change the value of the property in Constant. We can change the value in Constant using the list box provided for the Constant option (True / False). Click OK. Checkpoint inserted. Note: Insert the other checkpoints by following the above process and by using the customer requirements. Click Run. The Run dialogue box is displayed. Click OK. Note: This Run dialogue box displays the results location of the current test. The Test Results window is displayed. Example: Flight Reservation VB Script for the Update Order button. Requirements: Focus to window --------- Update Order button disabled. Open order --------- Update Order button disabled. Perform change --------- Update Order button enabled. VB Script:

Window("Flight Reservation").WinMenu("Menu").Select "File;Open Order..."
Window("Flight Reservation").WinButton("Update Order").Check CheckPoint("Update Order")
Window("Flight Reservation").Dialog("Open Order").WinCheckBox("Order No.").Set "ON"
Window("Flight Reservation").Dialog("Open Order").WinEdit("Edit").Set "1"
Window("Flight Reservation").Dialog("Open Order").WinButton("OK").Click
Window("Flight Reservation").WinButton("Update Order").Check CheckPoint("Update Order_2")
Window("Flight Reservation").WinRadioButton("First").Set
Window("Flight Reservation").WinButton("Update Order").Check CheckPoint("Update Order_3")
Window("Flight Reservation").WinButton("Update Order").Click
Test Results Window
This window displays the status of the current test in pass / fail terms. It consists of two parts: 1) Untitled test summary (in tree view) 2) Untitled test results summary. To know whether the current test passed or failed, i.e. whether the expected value equals the actual value, expand the tree displayed on the left side of the Test Results window and examine the checkpoint results. Untitled Test Results Summary: This tab displays the test name, results name, time zone, run start and end times, status (passed, failed, warnings) and the number of times the test passed or failed. Note: This tab also displays the name of any failed iteration.


Understanding the VB Script syntax

In QTP, VB Script is used to create tests because of its simplicity. The following rules apply when using VB Script in QTP:
1) Case Sensitivity: VB Script keywords are not case sensitive. However, it is good practice to match the exact case of object names as they were recorded. Ex: Window("Flight Reservation").WinButton("Update Order") (recommended) vs. window("flight reservation").winbutton("update order") (discouraged).
2) Text Strings: VB Script in QTP requires text strings to be enclosed in double quotes. Whenever a text string is used in a statement, it must be placed within double quotes. Ex: Window("Flight Reservation").Dialog("Open Order").WinEdit("Edit").Set "XXX"
3) Parentheses: VB Script in QTP requires object names and arguments to be placed in parentheses.
4) Comments: A comment describes, in a simple language like English, the operation performed on the application during the creation of tests. Comments begin with an apostrophe (') and are ignored during the run. Ex: Comment: ' In the Sample window the OK button is clicked. Script: Window("Sample").WinButton("OK").Click
5) Spaces: VB Script in QTP allows blank spaces between the statements of the test.
6) Variables: VB Script in QTP allows the parameterization of values into variables. Parameterization means storing values into variables so that they can be called during the run session of the test.
Understanding the variables
You can assign test objects or simple values to variables. A variable that stores a test object follows the object hierarchy and is assigned with the Set statement.
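The rules above can be seen together in one short sketch (object names are taken from the sample Flight Reservation application and are illustrative):

```vbscript
' Comments begin with an apostrophe and are ignored at run time
Dim Agent                    ' a variable declared with Dim
Agent = "mercury"            ' text strings must be enclosed in double quotes

' Object names and arguments appear inside parentheses
Dialog("Login").WinEdit("Agent Name:").Set Agent
Dialog("Login").WinButton("OK").Click
```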


Set objvar = <object hierarchy>

Ex:
Set UserEditbox = Window("Flight Reservation").WinEdit("Username")
UserEditbox.Set "XXX"

For ... Next Statement
In VB Script the For ... Next loop is used to execute a statement or a series of statements a fixed number of times. The For ... Next statement follows the syntax below.
Syntax:
For counter = start To end [Step step]
    Statements
Next
Ex:
For i = 1 To 8
    Window("Flight Reservation").WinEdit("Username").Set i
Next
Do ... Loop Statement
Do ... Loop statements enable QuickTest to execute a statement or a series of statements while a condition is true or until a condition becomes true.
Syntax:
Do [While | Until condition]
    Statements
Loop
While Statement
The While statement is used to execute a statement or a series of statements as long as a condition is true.
Syntax:
While condition
    Statements
Wend


If ... Then ... Else Statement
This statement is used to execute a statement or a series of statements when a condition is true, with optional alternative branches for other conditions.
Syntax:
If condition1 Then
    Statements
ElseIf condition2 Then
    Statements
Else
    Statements
End If
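As a hedged sketch, the loop and condition statements above can be combined in the Flight Reservation context like this (the order range and report messages are illustrative; GetROProperty and Reporter.ReportEvent are standard QTP calls):

```vbscript
' Open orders 1 to 5 one after another using Do ... Loop
orderNo = 1
Do While orderNo <= 5
    Window("Flight Reservation").WinMenu("Menu").Select "File;Open Order..."
    Window("Flight Reservation").Dialog("Open Order").WinCheckBox("Order No.").Set "ON"
    Window("Flight Reservation").Dialog("Open Order").WinEdit("Edit").Set orderNo
    Window("Flight Reservation").Dialog("Open Order").WinButton("OK").Click
    orderNo = orderNo + 1
Loop

' Report the outcome of a property check using If ... Then ... Else
If Window("Flight Reservation").WinButton("Update Order").GetROProperty("enabled") Then
    Reporter.ReportEvent micPass, "Update Order", "Button is enabled"
Else
    Reporter.ReportEvent micFail, "Update Order", "Button is disabled"
End If
```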

With Statement
The With statement is used to execute a series of statements on a single object without repeating the object hierarchy.
Syntax:
With object
    .Statements
End With
Ex:
With Window("Sample")
    .WinEdit("Username").Set "XXX"
    .WinButton("OK").Click
End With
Object Repository
Before starting the QTP testing process, test engineers concentrate on creating the Object Repository (GUI Map). The Object Repository is created so that QTP recognizes the objects, windows and browsers of the application. Creating the objects and windows through the OR keeps the QTP testing process simple.
Navigation: Select the Tools menu. Click Object Repository.


The Object Repository dialogue box is displayed. It consists of the following tabs:
1) Action
2) Properties
3) Configure Value
1) Action: The Action tab displays the objects, windows and browsers recognized from the application in a tree view. To add (recognize) objects and windows into this Action tab, follow the below navigation: Click the Add Objects button. A hand icon is displayed. Double click on the application window or browser title bar. The Object Selection – Add to Repository box is displayed.

Note: This box displays the location you clicked, which is associated with several objects. Click OK. The Add Object to Object Repository dialogue box is displayed. Note: This dialogue box is used to select which objects to add to the OR, using the below options:
Only the selected object
Selected object and all its direct children
Selected object and all its descendants
Of the above 3 options, select "Selected object and all its descendants". Click OK.
Note1: All the objects and windows are added to the Action tab in a tree view.
Note2: After the recognition of objects and windows is complete, to view a particular object or window recognized from the application, follow the below navigation: Expand the tree view in the Action tab. Select one object. Click the Highlight button, which points out the selected object or window in the application.
Object Spy

QTP provides an option to spy on the properties of objects and windows in the application by using the Object Spy (GUI Spy). Note: To open the Object Spy, follow the below navigation: Click the Object Spy button in the Object Repository, or select the Tools menu and click Object Spy. The Object Spy dialogue box is displayed. Note1: The Object Spy provides a pointing-hand button to select the object or window whose properties you want to view. Note2: The Object Spy consists of 2 options:
Run time object properties
Test object properties
Of these 2 options, select the second option to inspect the objects or windows recognized into the Object Repository. Click the hand icon button. The hand icon is displayed. Select the object or window you want to spy on in your application. Note: When we select an object or window in our application using this hand icon, the properties and methods of that object are displayed in the Object Spy.
2) Properties: This option is provided in the OR to display the properties of the objects and windows recognized into QTP. It displays the name of the property, the type of the property and the expected value for that property.
3) Configure Value: This option is provided in the OR to change the property value for the recognized objects. These values are called constants (true/false).
Bitmap Checkpoint: This checkpoint is used to test static and dynamic images. In QTP, a bitmap checkpoint is also used to test digital-signature-like applications in analog recording mode.
Navigation: Click Record in the QTP window. The Record and Run Settings dialogue box is displayed.

Select the Web tab. Select "Open the following browser when a record or run session begins". Enter the browser details (type, address). Click Apply. Click OK.

Text Checkpoint
A text checkpoint is used to test calculations and text-based output. We can test an object's text in the application using a text checkpoint.
Navigation: Start recording. Select the Insert menu. Select Checkpoint. Select and click Text Checkpoint. A hand icon is displayed. Select the text of the object we want to test. The Text Checkpoint Properties dialogue box is displayed.

Note1: This Text Checkpoint Properties dialogue box consists of a checkpoint summary and Configure (Constant and Parameter) options. Note2: There is an option to ignore the spaces within the displayed text (select the Ignore Spaces checkbox). Note3: If you want the text to match the requirement exactly, select Exact Match. Click OK. Text checkpoint inserted. Stop recording. Click Run. The Test Results window is displayed. Expand the untitled test iteration. Click on the checkpoint.

Text Area Checkpoint
This checkpoint is used to check the text within a screen area.
Navigation: Start recording. Select the Insert menu. Select Checkpoint. Select and click Text Area Checkpoint. A crosshairs icon is displayed. Select the area of text we want to test on an object. The Text Area Checkpoint Properties dialogue box is displayed. Click OK. Text area checkpoint inserted. Stop recording. Click Run. The Test Results window is displayed. Expand the untitled test iteration. Click on the checkpoint.

Database Checkpoint
This checkpoint is used to check the correctness of the backend tables. It is created either by using Microsoft Query or by specifying SQL statements.
Navigation: Start recording. Click the Insert menu. Click Checkpoint. Click Database Checkpoint. The Database Query Wizard dialogue box is displayed.

Note: To connect to the database, ODBC (Open Database Connectivity) is used. Select one of the below options:
Create query using Microsoft Query
Specify SQL statement manually
Of the above two options, select the second one to get the data from the database. Click Next. The Specify SQL Statement screen is displayed. Create the connection string (by using the Create button). Click Create. The Select Data Source dialogue box is displayed. Select the Machine Data Source tab. Select the DSN (data source name).

Note1: The DSN is provided by the development team to test the database (Ex: Flight32).


Note2: Flight32 is the DSN for the Flight Reservation application. Click OK. Specify the SQL statement (select * from orders). Click Finish. The Database Checkpoint Properties dialogue box is displayed.

Note3: This dialogue box consists of expected data, settings and cell identification. Under expected data, test engineers specify the constant values as expected values. Note4: A Browse button is displayed for the Constant / Constant Value options. When we click this Browse button, the Constant Value Options dialogue box is displayed. Enter the expected value, or use a regular expression to match all the orders in an order.
Navigation to test the database using MS Query: Start recording. Select the Insert menu. Select and click Checkpoint. Select Database Checkpoint. The Database Query Wizard dialogue box is displayed. Select "Create query using Microsoft Query". Click Next. The Microsoft Query window is displayed along with the Choose Data Source dialogue box.

Note: This dialogue box consists of 2 tabs: 1) Databases 2) Queries. Select the Databases tab. Select the DSN given by the development team. Click OK. The Query Wizard – Choose Columns dialogue box is displayed.

Note: Select the columns of data you want to include in the query. Click the Add > button. Note: The selected columns of the table are added to a separate list ("Columns in your query").

Click Next. The Query Wizard – Filter Data dialogue box is displayed. Note: Select the columns you want to filter from the displayed list. Click Next. The Sort Order dialogue box is displayed. Click Next. The Query Wizard – Finish dialogue box is displayed. Select "View data or edit query in Microsoft Query". Click Finish.

Step Generator
This option is provided in QTP to discover the available functions (methods) for each object and window. The Step Generator consists of the following options:
1) Category
2) Library
3) Operation
4) Arguments
5) Generated step
6) Return value
7) Insert another step

Category: This is a list box provided by the Step Generator in QTP. It shows different categories of functions, such as test objects, utility objects and functions.
Library: There are different types of functions available in QTP, such as library functions, built-in functions and local script functions.
Operation: Operation displays the different types of operations we can perform on the application.
Arguments: The arguments are used to assign the expected value of the particular property selected.
Navigation: Select the category (Test Objects). Select the object (click the Browse button displayed for the object). The Select Object for Step dialogue box is displayed. Use the pointing-hand icon to select the object and generate a step for it. The Object Selection dialogue box is displayed. Click OK.

Note: The selected object is displayed in the Object list box. Select the operation (Click, Set) required for the selected object. The step is generated in the Generated Step box. Click OK.
Navigation for Step Generator: Select the Insert menu. Select Step. Select and click Step Generator (or press F7). The Step Generator dialogue box is displayed.

Object Identification
This option is available in QTP to view the properties used to identify different objects and to expose hidden properties.
Navigation: Select the Tools menu. Select Object Identification. The Object Identification dialogue box is displayed. Object Identification displays the properties of an object for the selected environment. The environment list covers different types of objects, such as ActiveX controls, standard Windows, Visual Basic and Web. **********************************THE END*****************************

Security Testing: This testing technique is complex to apply. During this test, the testing team concentrates on the privacy of user operations in our application build. This testing technique is classified into the below sub-tests:
a) Authorization: Is a user authorized or not to connect to the application?
b) Access Control: Does a valid user have permission to use a specific service or not?
c) Encryption/Decryption: The client (sender) performs encryption, converting the request from original text to cipher text; the server (receiver) performs decryption, converting the cipher text back to the original text. The response follows the same path in reverse.

Client --(Request: original text -> cipher text)--> Server
Client <--(Response: cipher text -> original text)-- Server

Note: In small-scale organizations, authorization and access control are covered by the test engineers. Developers perform the encryption/decryption procedures.
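To make the cipher-text idea concrete, here is a toy VB Script illustration (a one-character shift only, NOT a real cipher; real applications use algorithms such as AES or RSA):

```vbscript
' Toy "encryption": shift every character code up by 1
Function Encrypt(originalText)
    Dim i, result
    result = ""
    For i = 1 To Len(originalText)
        result = result & Chr(Asc(Mid(originalText, i, 1)) + 1)
    Next
    Encrypt = result
End Function

' Toy "decryption": shift every character code back down by 1
Function Decrypt(cipherText)
    Dim i, result
    result = ""
    For i = 1 To Len(cipherText)
        result = result & Chr(Asc(Mid(cipherText, i, 1)) - 1)
    Next
    Decrypt = result
End Function

' Client side: original text -> cipher text
MsgBox Encrypt("HELLO")   ' shows "IFMMP"
' Server side: cipher text -> original text
MsgBox Decrypt("IFMMP")   ' shows "HELLO"
```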

Introduction
Test Management Process:

Define Test Requirements -> Develop Test Plan -> Execute Tests -> Track Defects

The four modules of Quality Center (Requirements, Test Plan, Test Lab, and Defects) map directly to the following stages of the testing process:
Define test requirements: This phase involves identifying and validating the functional and performance requirements that need to be tested.
Develop test plan: This phase involves planning and confirming which tests need to be performed and how these tests must be executed.
Execute tests: This phase involves organizing test sets, scheduling their executions, performing test runs, and analyzing the results of these runs.


Track defects: This phase involves reporting defects that were detected in the application and tracking the repairs. Quality Center provides reports and graphs for performing test data analysis and gathering status updates across all stages of the testing process.

Defining Test Requirements


Purpose: This chapter covers the first stage of the test management process. It describes the steps to define, create, and modify test requirements using four different views.

Objectives
After completing this chapter, you will be able to:
Identify the characteristics of a useful test requirement.
Define the testing requirements of an application.
Build a requirements tree.
Create requirements.
Track the status of requirements.
Mercury Education - Not For Duplication
What are Test Requirements?
Test requirements describe in detail the components that need to be tested in an application and provide the test team with the foundation on which the entire testing process is based. Defining test requirements clearly and correctly at the beginning of a project has the following advantages:
Aids development and testing: Clearly defined test requirements help developers set a target for themselves. Test requirements also provide guidelines to the testing team to identify their testing priorities.
Helps prevent scope creep: Documented requirements are the best defense against scope creep, where requirement documents are continually amended and appended, impeding the software development and testing efforts. You need to avoid this kind of moving target, and instead clearly define the goal at the start of the project. You can then use that goal as a reference to focus individual efforts.
Sets clear expectations between teams: Defining requirements and getting them signed off by relevant stakeholders is the best way to ensure that expectations have


been agreed upon by all parties involved: product marketing, customer service, IT, and documentation. Make sure that all necessary parties are represented in the creation of the requirements. Confirm and validate the expectations of everyone involved in the project.
Saves time and money: "Measure twice, cut once" is a phrase used in carpentry, but it applies to requirement development as well. Measuring a piece of wood to prepare for a cut does not take much time, but if you do not do it right, you may have to scrap what you are doing and start over. If you do not want to waste time and money later in your project, take time at the beginning to invest in your requirements.

Characteristics of a Useful Test Requirement
A useful test requirement is always:
Unique: Is this the only requirement that defines this particular objective?
Precise: Are there any vague words that are difficult to interpret?
Bounded: Are there concrete boundaries in the objectives?
Testable: Can you build one or more test cases that will completely verify all aspects of this requirement?
What is a Requirements Tree?
Quality Center helps you define requirements for the testing process. The test requirements are arranged hierarchically. Quality Center provides this functionality through the REQUIREMENTS module, which enables you to build a requirements tree to outline and organize the test requirements of a project. Building this tree makes it easy for you to define and manage hierarchical relationships between requirements. Requirements can be displayed in four views:
REQUIREMENTS TREE view: enables you to view requirements in a tree.


REQUIREMENTS GRID view: enables you to display requirements in a flat, non-hierarchical view.
REQUIREMENTS COVERAGE view: enables you to view requirements based on their association with tests and defects.
COVERAGE ANALYSIS view: enables you to analyze the breakdown of child requirements according to their test coverage status.
The REQUIREMENTS GRID view provides a tabulated list of the requirements data. You can use this view to add requirements within the hierarchy. The REQUIREMENTS TREE view displays the parent-child relationship between test requirements. This helps in analyzing test requirements with respect to their position in the requirements hierarchy.
Using the Requirements Coverage View
The REQUIREMENTS COVERAGE view enables you to define and track relationships between requirements and other entities, such as tests and defects. The REQUIREMENTS COVERAGE view has the following four tabs:
TEST COVERAGE: This tab displays the tests to which a requirement is linked and the execution status of these tests. When you switch to the REQUIREMENTS COVERAGE view, the TEST COVERAGE tab is selected by default. When you click the TEST COVERAGE tab, two new tabs appear towards the right side of the screen. These tabs are:
TEST PLAN TREE: This tab displays the hierarchy of tests within the test plan.
TEST SET TREE: This tab displays the hierarchy of tests within a test set.
DETAILS: This tab displays the details of the requirement, such as its PRIORITY, TYPE, AUTHOR, and CREATION DATE.
ATTACHMENTS: This tab displays the files that have been attached to a requirement.
LINKED DEFECTS: This tab displays the defects that were logged for a requirement.
Using the Coverage Analysis View


The COVERAGE ANALYSIS view provides a chart that shows the requirements that are directly associated with tests and the execution statuses of these tests. The COVERAGE ANALYSIS view is a graphical representation of the REQUIREMENTS COVERAGE view.

Test Planning
The second stage of the testing process is test planning. After test requirements are approved, the testing team designs tests that validate whether the application meets these requirements. The efficiency of the test runs depends on how well you plan and decide which tests need to be performed and how these tests should be executed.
Elements of the Test Plan Module
The test planning tools in Quality Center are organized into the following elements of the TEST PLAN module:
DETAILS tab: Enables you to enter descriptions for the subject folders and tests.
DESIGN STEPS tab: Enables you to specify the steps for each test.
TEST SCRIPT tab: Enables you to design scripts for automated tests.
ATTACHMENTS tab: Enables you to add attachments to subject folders or specific tests.
REQ COVERAGE tab: Enables you to link tests to requirements.
LINKED DEFECTS tab: Enables you to link tests to defects.
All the tabs listed above are displayed when a test is selected from the TEST PLAN tree.

Creating a Subject Folder

The TEST PLAN tree starts with the SUBJECT folder. From the SUBJECT folder, you create main subject folders and add subject sub-folders within each main folder. To add a folder:

1. From the TEST PLAN tree, select the SUBJECT folder to create a main subject folder. Note: You can select an existing main folder to create a sub-folder. 2. Click NEW FOLDER. The NEW FOLDER dialog box appears. 3. In the FOLDER NAME field, type a name for the new test subject. Note: A folder name cannot include any of the following characters: \, ^, or *. 4. Click OK to add the folder to the TEST PLAN tree. Folders can contain sub-folders, and each sub-folder can have sub-folders. Each folder or sub-folder can contain a maximum of 676 sub-folders.

Adding a Test
To add a test to the TEST PLAN tree, you need to define the basic information about the test. To add a test: 1. From the TEST PLAN tree, select the subject folder in which you want to add the new test. 2. On the Quality Center toolbar, click NEW TEST. The CREATE NEW TEST dialog box appears. 3. Select a TEST TYPE and type a name for your test in the TEST NAME field. Note: A test name cannot include any of the following characters: \, /, :, ", ?, <, >, |, *, % or '. 4. Click OK. The REQUIRED FIELDS dialog box appears. 5. Enter the appropriate values for the required fields. 6. Click OK to add the test to the selected subject in the TEST PLAN tree.

Adding Test Steps


When all of the individual tests are laid out, the next task is to specify the detailed steps to execute each test. Adding a test step involves specifying the actions to perform on the application, the input to enter, and the expected output. To add and define a test step: 1. Open the TEST PLAN module. 2. Select a test and click its DESIGN STEPS tab. 3. Click NEW STEP. The DESIGN STEP EDITOR dialog box appears.


4. Type a name for your step in the STEP NAME field. 5. In the DESCRIPTION field, type the instructions that need to be carried out in this step. 6. In the EXPECTED RESULT field, type a description of what should be expected after this step is completed. 7. Click OK when done. The test steps appear in the DESIGN STEPS tab. A footprint icon appears next to the test name in the TEST PLAN tree. This icon indicates that steps are defined for the test.
Calling a Test
You can build the steps of a test to include calls to other tests. This enables you to modularize and reuse a standard sequence of steps across multiple tests. To call another test as a step within a test: 1. Click the DESIGN STEPS tab of the calling test. 2. Click the CALL TO TEST button. The SELECT A TEST dialog box appears. 3. Select the test that you want to call and click OK. This adds a step in the current test and labels it as CALL <TEST_NAME>. If you call a test that has unassigned parameters, the PARAMETERS OF TEST dialog box appears, where you can assign values to the parameters.
Using Parameters
A parameter is a variable that can be assigned a value from outside the test in which it is defined. Parameters provide flexibility by enabling each calling test to dynamically change its values.

Test Execution
This chapter covers the tasks performed from the TEST LAB module: creating test sets, scheduling and running test executions, and recording test results. The third stage of the test management process involves organizing tests into test sets and running the tests. After running the tests, you have complete documentation that lists the inconsistencies, issues, and defects in your application. You can subsequently report these problems into a defect tracking system for further investigation, correction, and retesting.


Using the Test Lab Module


You perform all test execution tasks from the TEST LAB module. You use the TEST LAB module to create test sets and perform test runs in Quality Center. To navigate to this module, click the TEST LAB icon on the Quality Center sidebar.
What is a Test Set?
A test set is a group of tests designed to achieve specific testing goals. For example, you can build a test set to include tests that validate a specific functionality. You can also build a test set to include tests that verify that an application works in a particular environment. A test set can contain a combination of manual and automated tests. You can add a test multiple times to the same test set and across different test sets, so that you can reuse routines. The test sets tree organizes and displays your test sets hierarchically. Developing a test sets tree helps you organize your testing process by grouping test sets into folders and organizing the folders in different hierarchical levels. The test sets tree can contain folders at the root level to indicate the general classifications of test sets. The folders can contain subfolders that further classify the test sets in each hierarchy.

Creating a Test Set Folder The TEST SETS tree always starts with the ROOT folder. In this folder, you can create your main folders, and add subfolders. To add a folder: 1. From the TEST SETS tree, select the ROOT folder to create a main folder, or select an existing folder to create a subfolder. 2. On the toolbar, click NEW FOLDER. The NEW FOLDER dialog box appears. 3. In the FOLDER NAME field, type a name for the new folder. Note: A folder name cannot contain any of the following characters: \, ^, or *. 4. Click OK to add the folder to the TEST SETS tree. Note: Folders can contain subfolders, and each subfolder can contain further subfolders. Each folder or subfolder can contain a maximum of only 676 subfolders. Creating a Test Set


To create a test set: 1. From the TEST SETS tree, select the folder where you want to add the new test set. 2. Click NEW TEST SET. The NEW TEST SET dialog box appears. 3. Type a name for the test set in the TEST SET NAME field and its description in the DESCRIPTION field. Note: A test set name cannot include any of the following characters: \, ^, or *. 4. Click OK to add the new test set to the TEST SETS tree.
Elements of the Test Lab Module
The TEST LAB module in Quality Center consists of the following test execution elements, which you use to provide information about test sets:
EXECUTION GRID tab: Enables you to declare the tests that make up each test set, run tests, and review the results of these executions. Displays test data in a grid.
EXECUTION FLOW tab: Displays the test data in a diagram and provides drag-and-drop functionality for adding, sequencing, and scheduling tests.
TEST SET PROPERTIES tab: Enables you to define additional test execution parameters and requirements.
LINKED DEFECTS tab: Enables you to view the defects that are associated with a test. You can also add new defects to a test.
LIVE ANALYSIS tab: Enables you to generate a graphical representation of the different fields associated with a test. When you update a test in a folder, the data change is reflected in the graph without manually regenerating the graph.
Providing Additional Information for a Test Set
To add additional information to a test set: 1. From the TEST SETS tree, select a test set. 2. Click the TEST SET PROPERTIES tab. 3. Click the DETAILS link. 4. In the TEST SET INFORMATION section, type the additional information that you need to specify. Note: Use the ATTACHMENTS button to add attachments to a test set. This provides the same procedures and options for adding attachments as in the REQUIREMENTS module.


Adding Tests to a Test Set

After building your TEST SETS tree, you select and add tests to each test set.

To add tests to a test set:
1. From the TEST SETS tree, select a test set.
2. Click the EXECUTION GRID tab and click SELECT TESTS. The TEST PLAN TREE tab appears on the right side of the screen.
3. Under the TEST PLAN TREE tab, click a test folder to add an entire group of tests, or click a test name to add a specific test.
4. Click ADD TESTS TO TEST SET. This adds the test to the test set and prefixes a number to its name. The number indicates how many instances of the same test have been added to this test set.
Note: If you select a folder containing tests that are already included in the test set, you are prompted to choose which tests in the folder you still want to add. Additionally, if the tests that you add have unassigned parameters, you are prompted to enter the required values. You can also drag and drop tests from the TEST PLAN tree to the EXECUTION GRID tab.
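The instance numbering described in step 4 can be modeled in a few lines. Note that the exact display format Quality Center uses for the prefix (shown here as [1]Login) is an assumption for illustration, as is the TestSet class itself:

```python
class TestSet:
    """Minimal model of a test set that numbers repeated test instances."""

    def __init__(self, name):
        self.name = name
        self._counts = {}   # test name -> number of instances added so far
        self.instances = []

    def add_test(self, test_name):
        # Each time the same test is added, its instance number increases.
        n = self._counts.get(test_name, 0) + 1
        self._counts[test_name] = n
        instance = f"[{n}]{test_name}"
        self.instances.append(instance)
        return instance

smoke = TestSet("Smoke")
print(smoke.add_test("Login"))   # [1]Login
print(smoke.add_test("Login"))   # [2]Login -- second instance of the same test
print(smoke.add_test("Logout"))  # [1]Logout
```

The point of the numbering is that one test from the TEST PLAN tree can appear several times in one test set, each occurrence being a separately runnable instance.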

Running Tests Manually

To manually run a test and record its results:
1. From the TEST SETS tree, select a test set.
2. Click the EXECUTION GRID or EXECUTION FLOW tab and select one or more manual tests. Hold down the SHIFT key to select multiple tests.
3. On the Quality Center toolbar, click the RUN arrow and select RUN MANUALLY. The MANUAL TEST RUN dialog box appears.
4. Select MANUAL RUNNER and click OK. The MANUAL RUNNER dialog box appears.
Note: Use the RUN DETAILS section to record information about the current run, and the TEST DETAILS section to make changes to the tests before running them.
5. To start the test run, click BEGIN RUN.
Note: If the tests contain unassigned parameters, you are prompted to assign values in the PARAMETERS VALUES FOR RUN dialog box. Type the parameter values and click OK. The MANUAL RUNNER dialog box appears.
6. Perform each test step as outlined in the DESCRIPTION field of the MANUAL RUNNER dialog box.
7. Record the status and actual result of each step using the provided fields.
8. To end the test run, click END RUN.
Note: To resume a test whose status is NOT COMPLETED, on the Quality Center toolbar, click the RUN arrow and select CONTINUE MANUAL RUN.

Recording Results of Manual Tests

While recording the results of a manual test run, you use the:
- COMPACT VIEW button to individually view and update the DESCRIPTION, EXPECTED RESULT, and ACTUAL RESULT fields of each test step.
- STATUS column to record the execution status of a test.
- ACTUAL field to record additional details about the completion of a test step execution.
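A run's overall status follows from the statuses recorded per step. The aggregation rule below (any failed step fails the run; unexecuted steps leave it NOT COMPLETED) is a plausible sketch of the behavior, not Quality Center's documented algorithm:

```python
def run_status(step_statuses):
    """Derive an overall run status from the STATUS recorded per step.

    Assumed rule: one failed step fails the run; the run only passes
    once every step has passed; anything else is still in progress.
    """
    if any(s == "Failed" for s in step_statuses):
        return "Failed"
    if step_statuses and all(s == "Passed" for s in step_statuses):
        return "Passed"
    return "Not Completed"  # some steps have no result yet

print(run_status(["Passed", "Passed"]))   # Passed
print(run_status(["Passed", "Failed"]))   # Failed
print(run_status(["Passed", "No Run"]))   # Not Completed
```

This is why CONTINUE MANUAL RUN exists: a run left in NOT COMPLETED can be resumed and its remaining steps executed later.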

Defect Tracking
Managing and tracking application issues is a critical step in the testing process because it involves significant time and money. If application issues are not tracked correctly, the testing team must spend additional effort tracking them at a later stage, which may impact the testing lifecycle. Quality Center provides a central defect tracking system that both the testing and development teams can use to collaboratively resolve defects.

Using the Defects Module

The DEFECTS module in Quality Center provides a complete system for logging, tracking, managing, and analyzing application defects. To navigate to this module, click the DEFECTS icon in the Quality Center sidebar. The DEFECTS module screen is displayed, which lists the various defects and their attributes.

Quality Center's defect tracking tools are organized into the following elements of the DEFECTS module:
- DEFECTS grid: Provides a tabular list of the defects submitted for a project.
- Grid filter: Provides filter fields where you can define criteria for filtering the data listed in the DEFECTS grid.
- DESCRIPTION tab: Provides a text box for entering and reviewing annotations for each defect.
- ATTACHMENTS tab: Enables you to attach files, snapshots, URLs, system information, and clipboard images to better explain a defect.
- HISTORY tab: Provides an audit trail of changes made to each defect.

Logging a Defect
Quality Center enables you to log defects at any stage of the testing process. Whether you are defining test requirements, building a test plan, or executing test runs, each Quality Center module you work with provides a common tool for logging defects.

To log a defect:
1. In the DEFECTS module, click NEW DEFECT. The NEW DEFECT dialog box appears. It may contain data fields and multiple tabbed pages that your Quality Center administrator has custom-defined for your project.
2. Type the appropriate information to describe the defect. Besides filling in the data fields, you can also add attachments to the defect to provide further information.
3. Click SUBMIT to save the defect to the DEFECTS module. To log another defect, use the refreshed NEW DEFECT dialog box to type new information. Otherwise, click CLOSE.

Defect-Test Relationship

To ensure traceability throughout the testing process, defects can be associated with tests. This is a direct link between a defect and a test. A defect may also be indirectly linked to a test through other entities, such as a test instance, a test run, or a test step. When a defect is associated with a test, it can be easily traced in all instances of that test. These test instances may be in the same test set or in different test sets.

For example, consider a test named Test_01. Assume that requirement R_01, which we created in the topic Defect-Requirement Relationship, is associated with Test_01. Therefore Defect_01, which we had linked to R_01, is now associated with Test_01. If there are three instances of Test_01 in three different test sets, then whenever an instance of this test is executed in any of the three test sets, Defect_01 is indirectly linked to that particular test instance.
The defect-test association has the following features:
- Tests from the TEST PLAN module can be associated with defects that have been logged in the DEFECTS module. This association allows you to use the status of the defects as the basis for determining if and when the tests should be run. Additionally, the requirements covered by these tests are automatically associated with their corresponding defects. For example, you may decide to run a test only if the defect status is CLOSED, meaning that the defect has been fixed by the development team but is pending your review. This ensures that a protocol is defined for communication between the development and testing teams, minimizing the time required for rework.
- Defects logged during a manual test run are automatically associated with that specific test run.
Note: The MANUAL RUNNER TEST SET dialog box is used to log defects during a test run.
- A test can be associated with multiple defects.

Associating Defects with a Test

To associate a defect with a test:
1. Navigate to the TEST PLAN module to view the TEST PLAN tree, which opens on the left side of the screen.
2. Select a test from the TEST PLAN tree.
3. Click the LINKED DEFECTS tab and click ADD AND LINK DEFECT. The NEW DEFECT dialog box appears. Type the appropriate information in the required fields and click SUBMIT to add the defect.
4. To link an existing defect to a test instance, click LINK EXISTING DEFECT. You can type a defect ID or click SELECT to select a defect from the DEFECTS grid.
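The Test_01/Defect_01 example above (one direct defect-test link, traced through every instance of the test) can be modeled in a few lines. The class and method names here are illustrative only and are not part of any Quality Center API:

```python
class DefectTracker:
    """Toy model of direct defect-test links and indirect instance links."""

    def __init__(self):
        self._links = {}      # defect id -> set of linked test names
        self._instances = []  # (test set name, test name) pairs

    def link_defect(self, defect_id, test_name):
        """Record a direct link between a defect and a test."""
        self._links.setdefault(defect_id, set()).add(test_name)

    def add_instance(self, test_set, test_name):
        """Record that a test set contains an instance of a test."""
        self._instances.append((test_set, test_name))

    def traced_instances(self, defect_id):
        """Every test instance the defect is indirectly linked to."""
        tests = self._links.get(defect_id, set())
        return [(ts, t) for ts, t in self._instances if t in tests]

qc = DefectTracker()
qc.link_defect("Defect_01", "Test_01")      # direct defect-test link
for ts in ("Set_A", "Set_B", "Set_C"):      # three instances of Test_01
    qc.add_instance(ts, "Test_01")
print(qc.traced_instances("Defect_01"))     # all three instances are traced
```

The sketch shows why the association is useful: once the direct link exists, no per-instance bookkeeping is needed to trace the defect into every test set that contains the test.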

Reporting and Analysis


You can generate reports and graphs within each Quality Center module to track and assess the progress of your project at any stage of the testing process. The REQUIREMENTS, TEST PLAN, TEST LAB, and DEFECTS modules provide predefined report and graph templates, which you use to retrieve the information you want to analyze.

Generating a Report

To generate a report:
1. Navigate to the Quality Center module that contains the data you want to use for the report.
2. From the menu bar, select ANALYSIS > REPORTS. A menu appears that lists the types of reports available in the current module.
3. Click the report type you want to run. After the report generation task is complete, the report output is displayed in the current window.

Quality Center Graph Types

Quality Center graphs help you analyze the progress of your work. These graphs also help you analyze the relationships between the data that your project has accumulated throughout the testing process. The following graph types are available in Quality Center:
- SUMMARY graphs: Each Quality Center module provides summary graphs specific to the tasks that it supports. This graph type shows the total count of requirements, tests, tests in test sets, or defects defined throughout the testing process.
- PROGRESS graphs: Each Quality Center module provides progress graphs specific to the tasks that the module supports. This graph type shows the accumulation of requirements, tests, tests in test sets, or defects over a specific period.
- TREND graphs: The REQUIREMENTS, TEST PLAN, and DEFECTS modules provide trend graphs specific to the tasks they support. This graph type shows the history of changes to specific fields over a specific period.
- REQUIREMENTS COVERAGE graphs: This graph type is specific to the REQUIREMENTS module. It shows the total count of requirements, grouped by test coverage status.
- DEFECTS AGE graphs: This graph type is specific to the DEFECTS module. It summarizes the lifetime of all reported defects. The lifetime of a defect begins when it is reported and ends when it is closed.
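The DEFECTS AGE definition above (lifetime runs from the report date until closure, or until now for an open defect) reduces to a small date computation. A sketch, with a fixed reference date so the example is deterministic:

```python
from datetime import date

def defect_age_days(reported, closed=None, as_of=date(2010, 3, 31)):
    """Lifetime of a defect in days: report date until closure,
    or until the reference date if the defect is still open."""
    end = closed if closed is not None else as_of
    return (end - reported).days

print(defect_age_days(date(2010, 3, 1), closed=date(2010, 3, 15)))  # 14
print(defect_age_days(date(2010, 3, 1)))                            # 30: still open
```

A DEFECTS AGE graph then simply buckets these per-defect ages (for example into ranges of days) and charts the counts.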
